Development and Validation of a Point Cloud Data Processing Algorithm for Obstacle Recognition in Double Hull Block
Abstract
Shipyards have recently been experiencing severe labor shortages, prompting increased adoption of production automation systems to mitigate this issue. This paper introduces a point cloud data acquisition and processing system designed to support automation operations within double-hull block environments. The acquisition system utilizes LiDAR sensors and is built as a portable device capable of conducting 360-degree scans inside double-hull blocks. The processing system integrates the random sample consensus (RANSAC) algorithm for plane recognition and a voxelization algorithm for object detection, enabling accurate identification of obstacles within the double-hull block. To validate the system, a full-scale test bench was constructed to replicate actual working conditions. Experimental results indicated that the system could detect the positions of various obstacles within the test bench with positional errors within 100 mm, which is sufficient for the implementation of automation systems. The findings from this research are expected to facilitate the adoption of production automation in shipyards, enhancing productivity and addressing labor shortages in the industry.
1. Introduction
The key recent developments in the shipbuilding industry revolve around eco-friendliness and the integration of smart technologies, encompassing digital transformation, autonomous navigation, and the establishment of smart shipyards. Within the realm of smart shipyards, research efforts are focused on real-time logistics tracking, production time forecasting, and production automation (Kim et al., 2023). Specifically, shipyards are increasingly investing in production automation to mitigate the shortage of production workers. However, implementing production automation in shipbuilding presents unique challenges that are distinct from those of other industries. Further, the large scale of production targets and the need to operate in confined spaces complicate equipment investments and deployment. As illustrated in Fig. 1, welding and preprocessing within a ship’s double-hull structure still largely rely on manual labor or semiautomated equipment.
Nevertheless, advancements in IT and robotics technology are creating new possibilities for utilizing automated equipment inside double-hull blocks. Consequently, research is being conducted to digitize work targets and environments and to develop technologies capable of identifying the work environment. Although some computer-aided design (CAD) data is available for the welding environment, discrepancies between prefabricated products and the CAD data―as well as the presence of joints and various cables installed on-site―must be identified using sensors. Therefore, this study developed a technology utilizing light detection and ranging (LiDAR) to identify the work environment inside a double-hull block.
Section 2 reviews related works, while Section 3 details the algorithm used to detect obstacles within point cloud data collected from the double-hull block. Section 4 outlines the operational method of the system implemented with the developed algorithm, along with relevant test results. Finally, Section 5 presents the conclusions and future research directions.
2. Related Work
Recent advancements in hardware and software performance have enabled the widespread acquisition and active use of point cloud data across various fields. Although research on the utilization of point cloud data has been ongoing for many years, one of the most widely used algorithms, the Iterative Closest Point (ICP) algorithm, was introduced by Besl and McKay (1992). This algorithm employs a transformation function to identify the closest pairs between two point sets and minimize the distance between them. It involves steps such as finding initial corresponding points, eliminating incorrectly matched points using the transformation function, and measuring and optimizing the error. Subsequent research utilizing this algorithm has validated its effectiveness in analyzing relationships between point cloud datasets and in model construction (Dorai et al., 1998; Johnson and Kang, 1999). Building on this foundational research, along with recent advancements in hardware and software, numerous studies have been conducted to leverage these technologies in the shipbuilding sector, as outlined below.
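For reference, the point-to-point ICP iteration summarized above can be sketched roughly as follows. This is a minimal illustration of the general technique, not the implementation used in the cited studies; it assumes the Eigen library for the linear algebra, uses a brute-force nearest-neighbor search, and omits the rejection of incorrectly matched points for brevity.

```cpp
#include <Eigen/Dense>
#include <limits>
#include <vector>

using Cloud = std::vector<Eigen::Vector3d>;

// Point-to-point ICP: align 'source' to 'target' over a fixed number of
// iterations. Real implementations use a k-d tree for the correspondence
// search and discard poorly matched pairs before estimating the transform.
void icpAlign(Cloud& source, const Cloud& target, int maxIterations = 20) {
    for (int iter = 0; iter < maxIterations; ++iter) {
        // 1. Find the closest target point for every source point.
        std::vector<int> match(source.size());
        for (std::size_t i = 0; i < source.size(); ++i) {
            double best = std::numeric_limits<double>::max();
            for (std::size_t j = 0; j < target.size(); ++j) {
                double d = (source[i] - target[j]).squaredNorm();
                if (d < best) { best = d; match[i] = static_cast<int>(j); }
            }
        }
        // 2. Estimate the rigid transform that minimizes the matched distances.
        Eigen::Vector3d cs = Eigen::Vector3d::Zero(), ct = Eigen::Vector3d::Zero();
        for (std::size_t i = 0; i < source.size(); ++i) {
            cs += source[i];
            ct += target[match[i]];
        }
        cs /= static_cast<double>(source.size());
        ct /= static_cast<double>(source.size());
        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
        for (std::size_t i = 0; i < source.size(); ++i)
            H += (source[i] - cs) * (target[match[i]] - ct).transpose();
        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
        if (R.determinant() < 0) {              // guard against a reflection
            Eigen::Matrix3d V = svd.matrixV();
            V.col(2) *= -1.0;
            R = V * svd.matrixU().transpose();
        }
        Eigen::Vector3d t = ct - R * cs;
        // 3. Apply the transform and repeat until the error stops improving.
        for (auto& p : source) p = R * p + t;
    }
}
```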
2.1 Studies on the Application of Point Cloud Data in Shipbuilding and Marine Sectors
Lee et al. (2022a) analyzed the PointNet and RandLA-Net algorithms, which are used for detecting obstacles in 3D point cloud data, and evaluated their applicability in the shipbuilding and maritime industries. The study found that RandLA-Net, which addresses the limitations of PointNet, exhibited strong predictive performance with large-scale data, suggesting its potential for future application in autonomous ship operations and smart shipyard process management. Building on these findings, a method was developed for the automatic detection and removal of scaffolding systems in liquefied natural gas carrier (LNGC) cargo holds using a deep learning-based 3D object detection technique. This demonstrates that point cloud data can enhance quality management and improve productivity in LNGC cargo holds (Lee et al., 2021).
Lee et al. (2014) conducted a study on the application of augmented reality technology to accurately and efficiently assess the consistency between physical models and design data in the shipbuilding and marine plant sectors. They proposed a method for enhancing consistency and estimating position using the ICP algorithm with point cloud data, demonstrating an improved process for reviewing complex drawings and real-time consistency assessment.
Li (2016) introduced a method for predicting alignment errors in ship blocks using point cloud data and a 3D laser scanner. This method involved calculating discrepancies between actual blocks and CAD models using the ICP algorithm and verifying whether these errors were within acceptable limits. This approach contributed to increased shipyard productivity and reduced construction timelines.
Lee et al. (2016) proposed a method for automatically modeling pipeline connections using LiDAR-scanned point cloud data. This approach estimated pipe geometry through the Hough transform and identified its location to create a 3D pipeline model. Additionally, Heo et al. (2024) developed a mapping framework that generates 3D spatial information surrounding an unmanned surface vessel by creating a precise 3D spatial model through low-light image correction and point cloud data preprocessing techniques. The framework also employs the generalized ICP (G-ICP) technique to match point cloud data, resulting in 3D spatial information suitable for digital twin applications.
As noted above, extensive research has been conducted on the application of point cloud data in the shipbuilding and marine sectors. However, this study specifically focuses on using point cloud data for obstacle detection in the operation of automated equipment within a double-hull block, an area that has not been previously explored. To detect obstacles, hardware is first required to capture point cloud data, along with software to process the collected data. The development of the hardware component, one of the two core technologies, is discussed in the study by Lee et al. (2022b). While this study primarily focuses on software development, the following section provides a brief overview of the hardware aspects to support the explanation of the software implementation.
3. Development of Point Cloud Data Acquisition System Using LiDAR Sensor
3.1 Results of the Point Cloud Data Acquisition System Hardware Development
Work inside the double-hull block typically involves blasting, painting, and welding, which generate dust and smoke, making it challenging to operate sensor systems concurrently with these tasks. To address this issue, a portable obstacle detection sensor was developed, allowing workers to detect obstacles within the block before deploying the robot when the work site is still free of dust and smoke. The acquired data can then be incorporated into the robot program. Fig. 2(a) presents the design drawing of the point cloud data acquisition system, while Fig. 2(b) shows the image of the fabricated hardware. As illustrated in the design drawing, the acquisition system includes a power switch and a sensing start/stop button. It is equipped with a motor for 360-degree rotation, a communication board for interfacing with the motor and sensors, and a battery, among other components. Preliminary tests were conducted to evaluate the suitability of the selected LiDAR sensor. Fig. 3(a) shows the panel test setup, and Fig. 3(b) indicates that while the sensor performed accurately on a white panel, it exhibited a measurement error of approximately 30 mm on a gray panel. Although laser sensors are typically sensitive to object colors, the experimental results were consistent with the LiDAR sensor’s specifications. This level of accuracy was deemed sufficient for detecting the work environment inside the double-hull block. The detailed specifications of the LiDAR sensor used in this study are presented in Table 1.
3.2 Application Environment and Hardware Operation Method of Point Cloud Data Acquisition System
The double-hull structure, where the point cloud data acquisition system is applied, is fully depicted in Fig. 1. Fig. 1(a) shows the midship cross-section of a very large crude oil carrier (VLCC), with the blue box highlighting the double-hull structure. Fig. 1(b) presents the CAD model of this section, in which the stiffener materials are arranged parallel to each other. Although not shown in Fig. 1, the same structure is symmetrically positioned on the ceiling, along with the installation of various pipes.
To detect obstacles in the double-hull structure, the fabricated sensor system is installed inside the structure to acquire the relevant data. Figs. 4(a) and 4(b) illustrate the plan view and side view, respectively, of the data acquisition concept for the sensor system. Because the LiDAR sensor used in this system scans only within a single plane, the system rotates it 360 degrees to gather comprehensive point cloud data throughout the space. Fig. 4(c) shows the point cloud data collected inside the double-hull structure using the system.
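As an illustration of this acquisition concept, the conversion from rotating planar scans to a 3D point cloud could be sketched as follows. The coordinate convention, variable names, and angle definitions are assumptions made for this example rather than the actual implementation of the acquisition system: each LiDAR return is taken to provide a range r at an in-plane angle theta, while the motor supplies the rotation angle phi of the scan plane.

```cpp
#include <cmath>
#include <tuple>
#include <vector>

struct Point3D { double x, y, z; };

// Convert a single planar LiDAR return into a 3D point.
// r     : measured range (mm)
// theta : angle of the return within the scan plane (rad)
// phi   : rotation angle of the scan plane about the vertical axis (rad)
Point3D toCartesian(double r, double theta, double phi) {
    double u = r * std::cos(theta);  // horizontal offset within the scan plane
    double v = r * std::sin(theta);  // vertical offset within the scan plane
    // Rotating the scan plane about the vertical (z) axis spreads the planar
    // measurements into a full 3D sweep of the block interior.
    return { u * std::cos(phi), u * std::sin(phi), v };
}

// Accumulate one full 360-degree sweep into a single point cloud.
std::vector<Point3D> accumulateSweep(
        const std::vector<std::tuple<double, double, double>>& samples) {
    std::vector<Point3D> cloud;
    cloud.reserve(samples.size());
    for (const auto& [r, theta, phi] : samples)
        cloud.push_back(toCartesian(r, theta, phi));
    return cloud;
}
```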
4. Algorithm Development for Obstacle Detection in Double Hull Blocks
4.1 Fabrication of Test Bench for Generating Obstacle Detection Data
The point cloud data processing system examined in this study is used prior to the installation of production automation equipment, specifically before the deployment of the robot. The prototype robot designed for this stage is depicted in Fig. 5. The key characteristics of the robot and the definition of obstacles are as follows:
- The robot moves autonomously along the stiffeners of the double-hull block.
- Its driving route is pre-determined based on the obstacle recognition results from the point cloud data processing system, prior to deployment.
- An obstacle is defined as any object that obstructs the robot’s movement between stiffeners on the ceiling and floor.
- The robot navigates around obstacles by maintaining a clearance of over 200 mm to minimize interference as it approaches the workpiece (a brief sketch of this clearance margin follows the list).
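As a simple illustration of the last requirement, a detected obstacle box could be inflated by the 200 mm clearance before the driving route is planned around it. The box structure and helper function below are hypothetical and only sketch the idea.

```cpp
// Axis-aligned bounding box of a detected obstacle (coordinates in mm).
struct Box {
    double minX, minY, minZ;
    double maxX, maxY, maxZ;
};

// Expand an obstacle box by the required clearance so the planned driving
// route keeps at least 'clearance' mm away from the obstacle itself.
Box inflate(const Box& b, double clearance = 200.0) {
    return { b.minX - clearance, b.minY - clearance, b.minZ - clearance,
             b.maxX + clearance, b.maxY + clearance, b.maxZ + clearance };
}
```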
To develop and validate an obstacle recognition algorithm that meets the robot's operating requirements within a double-hull block, a test bench was constructed with various objects, as shown in Fig. 6. The floor, ceiling, and walls of the block were designed to accurately replicate the on-site environment. In particular, the stiffener materials for the floor and ceiling, which are critical for implementing the algorithm, were fabricated according to the exact dimensions specified in the CAD model of the actual ship. Fig. 6(a) presents a layout diagram featuring objects such as pipes, robots, desks, and blasting work units. While the robot and desk are not typically encountered during standard obstacle detection, they were intentionally included in the setup to facilitate point cloud data acquisition and testing. Figs. 6(b) and 6(c) display the object arrangement within the test bench from the front and back perspectives, respectively, illustrating that stiffeners are positioned at both the upper and lower sections of the block.
The point cloud data acquired from the test bench is shown in Fig. 4(c), while visualizations of the acquired data corresponding to camera A and camera B are depicted in Figs. 7(a) and 7(b), respectively. Using this test bench, point cloud data was collected and the obstacle recognition technology described in the following section was developed.
4.2 Development of Technology for Detecting Obstacles in Double Hull Block Using Point Cloud Data
Multiple steps are undertaken to identify obstacles in the collected point cloud data. Initially, the walls of the double hull are detected to establish the reference coordinates and workspace for the operation of automated equipment. Wall detection uses the random sample consensus (RANSAC) algorithm. First, a minimum of three points is randomly selected from the point cloud data to estimate a plane model. Using this model, the points that fall within a specified distance from the plane are identified from the entire data set, forming a consensus set. This process is repeated for a predefined maximum number of iterations; in each iteration, the candidate plane is evaluated by the size of its consensus set, and the model with the largest consensus set is ultimately chosen as the optimal plane. Fig. 8(a) shows the ceiling and floor surfaces already identified as planes by this algorithm, highlighted in red, and Fig. 8(b) displays the recognized wall plane.

Once the planes have been detected, the interior of the block bounded by these planes is segmented into small cell units, and the cells containing points are extracted. These cells derived from the point cloud data are defined as voxels, represented as small red cubes in Fig. 8(c). Points already identified as walls are excluded from the voxelization process to avoid redundancy. After voxelization, adjacent voxels are grouped together and identified as a single object.

However, the longitudinal stiffeners (Longi.) located above and below the block are not obstacles but work targets for welding or blasting. They are therefore excluded from obstacle detection, either through CAD information or by manually providing the relevant details; in this study, parameters such as the spacing between stiffeners (Longi. space) and their height were entered manually. Apart from the stiffeners, no other structures are work targets, so they are identified as obstacles that hinder the robot's operations. Fig. 8(d) shows the identified obstacles, represented as blue rectangular parallelepipeds. Several options were reviewed for handling the small blue boxes representing detected structures; it was concluded that if the number of points within a box falls below a specific threshold, the box is classified as noise and disregarded. The robot's cables and various supports were segmented into multiple small box segments according to their dimensions, but because these elements were identified as obstacles to be excluded from the robot's work area, no additional post-processing was performed on them.
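A minimal sketch of the RANSAC plane search described above is given below. It illustrates the general technique rather than the exact implementation of the developed software; the distance threshold, iteration count, and data structures are assumptions for this example. In the processing pipeline, such a routine would be applied repeatedly so that the floor, ceiling, and wall planes are extracted in turn and their inlier points excluded from the subsequent voxelization.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Point3D { double x, y, z; };

struct Plane { double a, b, c, d; };  // ax + by + cz + d = 0, (a,b,c) unit length

static double distanceToPlane(const Plane& pl, const Point3D& p) {
    return std::fabs(pl.a * p.x + pl.b * p.y + pl.c * p.z + pl.d);
}

// Fit the dominant plane in 'cloud' with RANSAC: repeatedly sample three points,
// build a candidate plane, count the points within 'threshold' of it (the
// consensus set), and keep the candidate with the largest consensus set.
Plane ransacPlane(const std::vector<Point3D>& cloud,
                  double threshold, int maxIterations,
                  std::vector<int>* inliersOut = nullptr) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, static_cast<int>(cloud.size()) - 1);
    Plane best{0.0, 0.0, 1.0, 0.0};
    std::vector<int> bestInliers;

    for (int iter = 0; iter < maxIterations; ++iter) {
        // 1. Randomly sample three points and form a candidate plane.
        const Point3D& p0 = cloud[pick(rng)];
        const Point3D& p1 = cloud[pick(rng)];
        const Point3D& p2 = cloud[pick(rng)];
        double ux = p1.x - p0.x, uy = p1.y - p0.y, uz = p1.z - p0.z;
        double vx = p2.x - p0.x, vy = p2.y - p0.y, vz = p2.z - p0.z;
        double nx = uy * vz - uz * vy;
        double ny = uz * vx - ux * vz;
        double nz = ux * vy - uy * vx;
        double norm = std::sqrt(nx * nx + ny * ny + nz * nz);
        if (norm < 1e-9) continue;            // degenerate (collinear) sample
        Plane cand{nx / norm, ny / norm, nz / norm, 0.0};
        cand.d = -(cand.a * p0.x + cand.b * p0.y + cand.c * p0.z);

        // 2. Collect the consensus set for this candidate plane.
        std::vector<int> inliers;
        for (int i = 0; i < static_cast<int>(cloud.size()); ++i)
            if (distanceToPlane(cand, cloud[i]) < threshold)
                inliers.push_back(i);

        // 3. Keep the candidate supported by the most points.
        if (inliers.size() > bestInliers.size()) {
            best = cand;
            bestInliers.swap(inliers);
        }
    }
    if (inliersOut) *inliersOut = bestInliers;
    return best;
}
```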
5. Implementation and Testing of the Obstacle Detection System
5.1 Implementation of the Obstacle Detection System
A program was developed based on the aforementioned algorithm to visualize the sensing data and identify obstacles within the double hull block. This program was implemented using C++ and MFC on the Windows 11 operating system and is compatible with any PC running Windows. Fig. 9 presents a screenshot of the final version of the obstacle detection software. Below is a description of each button’s function and a brief overview of the software’s operating algorithm.
① Once the sensing data is loaded, it is displayed on the screen with the initially read points visualized in gray. Refer to Fig. 9(a).
② This button analyzes the loaded sensing data to detect the wall surface of the block and set the block’s origin. Pressing it opens a pop-up window, as shown in Fig. 9(b).
③ This pop-up window allows users to enter the height and spacing of the upper and lower stiffeners of the sensed block.
④ After the user inputs the height and spacing of the stiffeners and clicks the confirm button, the block's wall surface is detected and its origin is established. If the wall recognition is successful, the block initially visualized in gray changes to green. Each object identified between the stiffeners is then considered an obstacle and registered as a box-shaped obstacle encompassing its outer perimeter (a sketch of this grouping step follows this list). Refer to Fig. 9(c).
⑤ Once the obstacle recognition process is complete, the final recognized obstacle detection results can be exported as an XML file by selecting the “File > XML Output” option from the menu.
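To make step ④ more concrete, the voxelization and grouping procedure of Section 4.2 can be sketched as follows. The voxel size, grid keying, flood-fill grouping, and noise threshold shown here are illustrative assumptions rather than the exact implementation: points not belonging to the recognized planes are binned into a voxel grid, adjacent occupied voxels are grouped into a single object, and each group is registered as an axis-aligned box enclosing its points, with very small groups discarded as noise.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <map>
#include <queue>
#include <vector>

struct Point3D { double x, y, z; };

// Axis-aligned box enclosing one detected object (coordinates in mm).
struct Box { double minX, minY, minZ, maxX, maxY, maxZ; };

using VoxelKey = std::array<int, 3>;   // integer grid index of a voxel

// Group points that do not belong to the recognized planes into obstacle boxes:
// 1. bin every point into a voxel of edge length 'voxelSize',
// 2. flood-fill over adjacent occupied voxels to form one object per group,
// 3. enclose each group's points in a bounding box, discarding groups with
//    fewer than 'minPoints' points as noise.
std::vector<Box> detectObstacles(const std::vector<Point3D>& points,
                                 double voxelSize, std::size_t minPoints) {
    std::map<VoxelKey, std::vector<int>> grid;
    for (int i = 0; i < static_cast<int>(points.size()); ++i) {
        VoxelKey k{ static_cast<int>(std::floor(points[i].x / voxelSize)),
                    static_cast<int>(std::floor(points[i].y / voxelSize)),
                    static_cast<int>(std::floor(points[i].z / voxelSize)) };
        grid[k].push_back(i);
    }

    std::vector<Box> obstacles;
    std::map<VoxelKey, bool> visited;
    for (const auto& cell : grid) {
        const VoxelKey& seed = cell.first;
        if (visited[seed]) continue;
        visited[seed] = true;

        // Flood-fill the connected component of occupied voxels around 'seed'.
        std::vector<int> member;
        std::queue<VoxelKey> frontier;
        frontier.push(seed);
        while (!frontier.empty()) {
            VoxelKey v = frontier.front();
            frontier.pop();
            const std::vector<int>& idx = grid.at(v);
            member.insert(member.end(), idx.begin(), idx.end());
            for (int dx = -1; dx <= 1; ++dx)
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dz = -1; dz <= 1; ++dz) {
                        VoxelKey n{ v[0] + dx, v[1] + dy, v[2] + dz };
                        if (grid.count(n) && !visited[n]) {
                            visited[n] = true;
                            frontier.push(n);
                        }
                    }
        }
        if (member.size() < minPoints) continue;          // treat as noise

        // Register the group as a box enclosing its outer perimeter.
        Box b{ points[member[0]].x, points[member[0]].y, points[member[0]].z,
               points[member[0]].x, points[member[0]].y, points[member[0]].z };
        for (int i : member) {
            b.minX = std::min(b.minX, points[i].x);  b.maxX = std::max(b.maxX, points[i].x);
            b.minY = std::min(b.minY, points[i].y);  b.maxY = std::max(b.maxY, points[i].y);
            b.minZ = std::min(b.minZ, points[i].z);  b.maxZ = std::max(b.maxZ, points[i].z);
        }
        obstacles.push_back(b);
    }
    return obstacles;
}
```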
5.2 Testing the Obstacle Detection System
The performance of the aforementioned system was validated using a test bench. Fig. 10(a) shows the acquisition of point cloud data from the test bench. The acquired point cloud data is depicted in Fig. 10(b), in which the blasting robot, the hose connected to the robot, the hose feeder, and the table can be identified. Fig. 10(c) shows that all of these objects are recognized as obstacles. From the robot's perspective, every item inside the double hull block is treated as an obstacle; the system does not distinguish between the types or roles of the individual objects.
To evaluate the accuracy of the detection system, two tests were conducted to measure the position of each object and compare it with the detection results. The outcomes of these tests are as follows.
(1) Obstacles used in the tests
- PIPE1: Diameter: 165 mm, Length: 2970 mm
- PIPE2: Diameter: 115 mm, Length: 1000 mm
- Robot1, Robot2, and the hose feeder: Because the attached cables made it difficult to measure their sizes accurately, their dimensions (the range they occupy within the block) were measured directly on-site.
(2) Results of measurements from the obstacle detection sensor
- Case 1: Measured after installing Robot1, Robot2, Hose feeder, and PIPE2
- Case 2: Measured after installing PIPE1 and PIPE2
Figs. 11(a) and 11(b) illustrate the actual and measured positions of the objects placed within the test bench, respectively, with the positions measured by the sensor indicated in parentheses. The results of the two tests show that the red measurement value in Fig. 11(a), which has the largest error, is within 100 mm. This level of error is considered acceptable for defining the robot's working range.
6. Conclusion and Future Research
This study developed a point cloud data acquisition and processing system using a LiDAR sensor to support the automation of work inside double hull blocks. An algorithm was proposed for effective obstacle recognition within the workspace, and its performance was validated through experimental testing. The sensor system demonstrated accurate obstacle detection in the complex environment of the double hull, establishing it as a foundational technology for automating production processes in shipyards. Future research will focus on implementing deep learning-based object recognition to improve detection performance in more complex structures, and on enhancing the system's stability and scalability for real-time data processing and integrated sensor operation across diverse environments. If these studies are successfully carried out, they will contribute significantly to the establishment of smart shipyards.
Notes
No potential conflict of interest relevant to this article was reported.
This work was supported by a Research Grant of Pukyong National University (2023).