1. Introduction
The key recent developments in the shipbuilding industry revolve around eco-friendliness and the integration of smart technologies, encompassing digital transformation, autonomous navigation, and the establishment of smart shipyards. Within the realm of smart shipyards, research efforts are focused on real-time logistics tracking, production time forecasting, and production automation (Kim et al., 2023). In particular, shipyards are increasingly investing in production automation to mitigate the shortage of production workers. However, implementing production automation in shipbuilding presents challenges distinct from those of other industries: the large scale of production targets and the need to operate in confined spaces complicate equipment investment and deployment. As illustrated in Fig. 1, welding and preprocessing within a ship’s double-hull structure still largely rely on manual labor or semiautomated equipment.
Nevertheless, advancements in IT and robotics technology are creating new possibilities for utilizing automated equipment inside double-hull blocks. Consequently, research is being conducted to digitize work targets and environments and to develop technologies capable of identifying the work environment. Although some computer-aided design (CAD) data is available for the welding environment, discrepancies between prefabricated products and the CAD data―as well as the presence of joints and various cables installed on-site―must be identified using sensors. Therefore, this study developed a technology utilizing light detection and ranging (LiDAR) to identify the work environment inside a double-hull block.
Section 2 reviews related works, while Section 3 details the algorithm used to detect obstacles within point cloud data collected from the double-hull block. Section 4 outlines the operational method of the system implemented with the developed algorithm, along with relevant test results. Finally, Section 5 presents the conclusions and future research directions.
2. Related Work
Recent advancements in hardware and software performance have enabled the widespread acquisition and active use of point cloud data across various fields. Although research on the utilization of point cloud data has been ongoing for many years, one of the most widely used algorithms―the Iterative Closest Point (ICP) algorithm―was introduced by Besl and McKay in 1992. This algorithm employs a transformation function to identify the closest pairs between two point sets and minimize the distance between them. It involves steps such as finding initial corresponding points, eliminating incorrectly matched points using the transformation function, and measuring and optimizing the error. Subsequent research has validated its effectiveness in analyzing relationships between point cloud data sets and in model construction (Dorai et al., 1998; Johnson and Kang, 1999). Building on this foundational research, along with recent advancements in hardware and software, numerous studies have been conducted to leverage these technologies in the shipbuilding sector, as outlined below.
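As context for the studies below, the ICP loop just described can be sketched in a few lines. The following is a minimal point-to-point variant (brute-force nearest-neighbor matching, SVD-based transform estimation) with toy data; it is a generic illustration, not the implementation used in any of the cited works:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:   # avoid reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=30, tol=1e-8):
    """Basic point-to-point ICP: match each source point to its nearest target
    point, solve for the rigid transform, apply it, and repeat until the mean
    residual stops improving."""
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        # brute-force nearest neighbors (a k-d tree would be used in practice)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        err = np.mean(np.linalg.norm(cur - nn, axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    # total transform mapping the original source onto the aligned result
    R_tot, t_tot = best_rigid_transform(src, cur)
    return R_tot, t_tot, err

# toy check: recover a known small rotation and translation
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (50, 3))
ang = np.radians(5.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.03])
R_est, t_est, final_err = icp(pts, pts @ R_true.T + t_true)
```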
2.1 Studies on the Application of Point Cloud Data in Shipbuilding and Marine Sectors
Lee et al. (2022a) analyzed the PointNet and RandLA-Net algorithms, which are used for detecting obstacles in 3D point cloud data, and evaluated their applicability in the shipbuilding and maritime industries. The study found that RandLA-Net, which addresses the limitations of PointNet, exhibited strong predictive performance with large-scale data, suggesting its potential for future application in autonomous ship operations and smart shipyard process management. Building on these findings, a method was developed for the automatic detection and removal of scaffolding systems in liquefied natural gas carrier (LNGC) cargo holds using a deep learning-based 3D object detection technique, demonstrating that point cloud data can enhance quality management and improve productivity in LNGC cargo holds (Lee et al., 2021).
Lee et al. (2014) conducted a study on the application of augmented reality technology to accurately and efficiently assess the consistency between physical models and design data in the shipbuilding and marine plant sectors. They proposed a method for enhancing consistency and estimating position using the ICP algorithm with point cloud data, demonstrating an improved process for reviewing complex drawings and real-time consistency assessment.
Li (2016) introduced a method for predicting alignment errors in ship blocks using point cloud data and a 3D laser scanner. This method involved calculating discrepancies between actual blocks and CAD models using the ICP algorithm and verifying whether these errors were within acceptable limits. This approach contributed to increased shipyard productivity and reduced construction timelines.
Lee et al. (2016) proposed a method for automatically modeling pipeline connections using LiDAR-scanned point cloud data. This approach estimated pipe geometry through the Hough transform and identified its location to create a 3D pipeline model. Additionally, Heo et al. (2024) developed a mapping framework that generates 3D spatial information around an unmanned surface vessel by creating a precise 3D spatial model through low-light image correction and point cloud data preprocessing techniques. The framework also employs the generalized ICP (G-ICP) technique to match point cloud data, resulting in 3D spatial information suitable for digital twin applications.
As noted above, extensive research has been conducted on the application of point cloud data in the shipbuilding and marine sectors. However, this study specifically focuses on using point cloud data for obstacle detection in the operation of automated equipment within a double-hull block, an area that has not been previously explored. Obstacle detection requires hardware to capture point cloud data and software to process the collected data. The development of the hardware component, one of the two core technologies, is discussed in Lee et al. (2022b). While this study primarily focuses on software development, the following section provides a brief overview of the hardware to support the explanation of the software implementation.
3. Development of Point Cloud Data Acquisition System Using LiDAR Sensor
3.1 Results of the Point Cloud Data Acquisition System Hardware Development
Work inside the double-hull block typically involves blasting, painting, and welding, which generate dust and smoke, making it challenging to operate sensor systems concurrently with these tasks. To address this issue, a portable obstacle detection sensor was developed, allowing workers to detect obstacles within the block before deploying the robot when the work site is still free of dust and smoke. The acquired data can then be incorporated into the robot program.
Fig. 2(a) presents the design drawing of the point cloud data acquisition system, while Fig. 2(b) shows the fabricated hardware. As illustrated in the design drawing, the acquisition system includes a power switch and a sensing start/stop button. It is equipped with a motor for 360-degree rotation, a communication board for interfacing with the motor and sensors, and a battery, among other components. Preliminary tests were conducted to evaluate the suitability of the selected LiDAR sensor. Fig. 3(a) shows the panel test setup, and Fig. 3(b) indicates that while the sensor performed accurately on a white panel, it exhibited a measurement error of approximately 30 mm on a gray panel. Although laser sensors are typically sensitive to object color, the experimental results were consistent with the LiDAR sensor’s specifications. This level of accuracy was deemed sufficient for detecting the work environment inside the double-hull block. The detailed specifications of the LiDAR sensor used in this study are presented in Table 1.
3.2 Application Environment and Hardware Operation Method of Point Cloud Data Acquisition System
The double-hull structure, where the point cloud data acquisition system is applied, is depicted in Fig. 1. Fig. 1(a) shows the midship cross-section of a very large crude oil carrier (VLCC), with the blue box highlighting the double-hull structure. Fig. 1(b) presents the CAD model of this section, in which the stiffeners are arranged parallel to each other. Although not shown in Fig. 1, the same structure is symmetrically positioned on the ceiling, along with various installed pipes.
To detect obstacles in the double-hull structure, the fabricated sensor system is installed inside the structure to acquire data. Figs. 4(a) and 4(b) illustrate the plan view and side view, respectively, of the data acquisition concept. As the LiDAR sensor can only collect data in a single plane, it is rotated 360 degrees to gather a complete point cloud of the space. Fig. 4(c) shows the point cloud data collected inside the double-hull structure using the system.
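The paper does not give the exact geometry used to merge the rotating planar scans, but under the common assumption that the vertical scan plane contains the rotation axis, the conversion from (range, beam angle, platform angle) to Cartesian coordinates is straightforward. A hypothetical sketch:

```python
import numpy as np

def scans_to_points(ranges, theta, phi):
    """Merge one 2D LiDAR scan per platform angle into a 3D point cloud.

    ranges : (n_phi, n_theta) measured distances (mm)
    theta  : (n_theta,) in-plane beam angles (rad)
    phi    : (n_phi,) platform rotation angles about the vertical axis (rad)

    Assumes the vertical scan plane contains the rotation axis, so a beam
    (r, theta) lies at (r cos(theta), 0, r sin(theta)) before the platform turn.
    """
    r = np.asarray(ranges, dtype=float)
    in_plane_x = r * np.cos(theta)[None, :]   # horizontal offset within the scan plane
    z = r * np.sin(theta)[None, :]            # height is unchanged by the rotation
    x = in_plane_x * np.cos(phi)[:, None]     # rotate the horizontal offset
    y = in_plane_x * np.sin(phi)[:, None]
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# a beam straight ahead (theta = 0) at platform angle 90 deg lands on the y-axis
pts = scans_to_points([[1000.0]], theta=np.array([0.0]), phi=np.array([np.pi / 2]))
```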
4. Algorithm Development for Obstacle Detection in Double Hull Blocks
4.1 Fabrication of Test Bench for Generating Obstacle Detection Data
The point cloud data processing system examined in this study is used prior to the installation of production automation equipment, specifically before the deployment of the robot. The prototype robot designed for this stage is depicted in Fig. 5. The key characteristics of the robot and the definition of obstacles are as follows:
- The robot moves autonomously along the stiffeners of the double-hull block.
- Its driving route is pre-determined based on the obstacle recognition results from the point cloud data processing system, prior to deployment.
- An obstacle is defined as any object that obstructs the robot’s movement between stiffeners on the ceiling and floor.
- The robot navigates around obstacles by maintaining a clearance of over 200 mm to minimize interference as it approaches the workpiece.
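One simple way to apply the 200 mm clearance requirement when pre-computing the driving route is to inflate each detected obstacle box by the clearance and treat any point inside an inflated box as blocked. This is a hypothetical sketch (the actual route planner is not described in this paper); note that the inflated-box test is conservative near box corners:

```python
# Obstacles are modeled as axis-aligned boxes (min corner, max corner, in mm),
# matching the box-shaped obstacle output described later in this paper.
CLEARANCE_MM = 200.0

def point_is_clear(p, boxes, clearance=CLEARANCE_MM):
    """True if waypoint p keeps at least `clearance` (per axis) from every box."""
    for lo, hi in boxes:
        if all(lo[i] - clearance <= p[i] <= hi[i] + clearance for i in range(3)):
            return False   # inside the inflated box: too close to this obstacle
    return True

boxes = [((1000, 500, 0), (1500, 900, 800))]       # one detected obstacle box
ok = point_is_clear((700, 700, 400), boxes)        # 300 mm from the near face
blocked = point_is_clear((900, 700, 400), boxes)   # only 100 mm from the near face
```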
To develop and validate an obstacle recognition algorithm that meets the robot’s operating requirements within a double-hull block, a test bench was constructed with various objects, as shown in Fig. 6. The floor, ceiling, and walls of the blocks were designed to accurately replicate the on-site environment. In particular, the stiffeners for the floor and ceiling―critical for implementing the algorithm―were fabricated according to the exact dimensions specified in the CAD model of the actual ship. Fig. 6(a) presents a layout diagram featuring objects such as pipes, robots, desks, and blasting work units. While the robot and desk are not typically encountered during standard obstacle detection, they were intentionally included in the setup to facilitate point cloud data acquisition and testing. Figs. 6(b) and 6(c) display the object arrangement within the test bench from the front and back perspectives, respectively, illustrating that stiffeners are positioned at both the upper and lower sections of the block.
The point cloud data acquired from the test bench is shown in Fig. 4(c), while the visualizations corresponding to camera A and camera B are depicted in Figs. 7(a) and 7(b), respectively. The point cloud data collected from this test bench was used to develop the obstacle recognition technology.
4.2 Development of Technology for Detecting Obstacles in Double Hull Block Using Point Cloud Data
Multiple steps are undertaken to identify obstacles in the collected point cloud data. Initially, the walls of the double hull are detected to establish the reference coordinates and workspace for the operation of automated equipment. For wall detection, the random sample consensus (RANSAC) algorithm is utilized. First, a minimum of three points are randomly selected from the point cloud data to estimate a plane model. Using this model, points that fall within a specified distance from the plane are identified from the entire data set, forming a consensus set. This process is repeated for a predefined maximum number of iterations; in each iteration, the candidate plane is evaluated by the size of its consensus set, and ultimately the model with the largest consensus set is chosen as the optimal plane.
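The RANSAC steps above can be sketched as follows; this is a generic illustration with toy data and assumed parameter values (iteration count, distance threshold), not the authors’ implementation:

```python
import numpy as np

def ransac_plane(points, dist_thresh, iters=200, seed=0):
    """RANSAC plane fit: sample 3 points, build a candidate plane, count points
    within dist_thresh of it, and keep the model with the largest consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (near-collinear) sample
            continue
        n = n / norm
        d = -n @ p0                     # plane equation: n . x + d = 0
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# toy data: a noisy wall plane near z = 0 plus scattered clutter (units: mm)
rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 5000, 300),
                        rng.uniform(0, 2000, 300),
                        rng.normal(0, 5, 300)])
clutter = rng.uniform(0, 5000, (60, 3))
(model_n, model_d), inliers = ransac_plane(np.vstack([wall, clutter]), dist_thresh=20.0)
```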
Fig. 8(b) displays the recognized plane, while Fig. 8(a) shows that the ceiling and floor surfaces have already been identified as planes by the same algorithm, highlighted in red. Once the planes are detected, the interior of the block is segmented into small cell units, and cells containing points are extracted. These cells are defined as voxels, represented as small red cubes in Fig. 8(c). Points already identified as walls are excluded from the voxelization process to avoid redundancy. After voxelization, adjacent voxels are grouped together and identified as a single object. However, the longitudinal stiffeners (Longi.) located above and below the block are not obstacles but work targets for welding or blasting. They are therefore excluded from obstacle detection either through CAD information or by manually providing the relevant details; in this study, parameters such as the spacing between stiffeners (Longi. space) and their height were manually input. Apart from the stiffeners, no other structures are work targets, and all remaining objects are consequently identified as obstacles because they hinder the robot’s operations.
Fig. 8(d) shows the identified obstacles, represented as blue rectangular parallelepipeds. Several options were reviewed for handling the small blue boxes representing detected structures; it was concluded that if the number of points within a box falls below a specific threshold, the box is classified as noise and disregarded. Moreover, the robot’s cables and various supports were segmented into multiple small boxes based on their dimensions. Because these elements were identified as obstacles to be excluded from the robot’s work area, no additional post-processing was performed on the segmented components.
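The voxelization, adjacency grouping, and noise-threshold steps described above can be illustrated with a short sketch. The voxel size, minimum point count, and use of 26-connectivity are assumptions for illustration, and the stiffener exclusion step is omitted:

```python
import numpy as np
from collections import deque

def detect_obstacles(points, voxel_mm=100.0, min_points=10):
    """Voxelize a (wall/ceiling/floor-removed) cloud, group 26-adjacent occupied
    voxels into objects, and drop groups with too few points as noise.
    Returns one axis-aligned bounding box (min, max) per detected obstacle."""
    idx = np.floor(points / voxel_mm).astype(int)
    occupied = {}                      # voxel index -> list of point ids
    for i, key in enumerate(map(tuple, idx)):
        occupied.setdefault(key, []).append(i)
    seen, boxes = set(), []
    for start in occupied:             # flood-fill over adjacent occupied voxels
        if start in seen:
            continue
        queue, cells = deque([start]), []
        seen.add(start)
        while queue:
            c = queue.popleft()
            cells.append(c)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = (c[0] + dx, c[1] + dy, c[2] + dz)
                        if nb in occupied and nb not in seen:
                            seen.add(nb)
                            queue.append(nb)
        pt_ids = [i for c in cells for i in occupied[c]]
        if len(pt_ids) < min_points:
            continue                   # too few points: classified as noise
        obj = points[pt_ids]
        boxes.append((obj.min(axis=0), obj.max(axis=0)))
    return boxes

# toy cloud: one dense "pipe" cluster plus a 3-point noise speck (units: mm)
rng = np.random.default_rng(2)
pipe = rng.uniform([1000, 1000, 0], [1200, 1100, 800], (200, 3))
noise = np.array([[4000.0, 50.0, 50.0], [4010.0, 55.0, 60.0], [4005.0, 52.0, 40.0]])
boxes = detect_obstacles(np.vstack([pipe, noise]))
```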
5. Implementation and Testing of the Obstacle Detection System
5.1 Implementation of the Obstacle Detection System
A program was developed based on the aforementioned algorithm to visualize the sensing data and identify obstacles within the double hull block. The program was implemented using C++ and MFC on Windows 11 and runs on any Windows PC. Fig. 9 presents a screenshot of the final version of the obstacle detection software. Below is a description of each button’s function and a brief overview of the software’s operating algorithm.
① Once the sensing data is loaded, it is displayed on the screen with the initially read points visualized in gray. Refer to Fig. 9(a).
② This button analyzes the loaded sensing data to detect the wall surface of the block and set the block’s origin. Pressing it opens a pop-up window, as shown in Fig. 9(b).
③ This pop-up window allows users to enter the height and spacing of the upper and lower stiffeners of the sensed block.
④ After inputting the height and spacing of the stiffeners and clicking the confirm button, the block’s wall surface is detected, and its origin is established. If the wall recognition is successful, the block initially visualized in gray changes to green. The object identified between the stiffeners is then considered an obstacle and registered as a box-shaped obstacle encompassing its outer perimeter. Refer to Fig. 9(c).
⑤ Once the obstacle recognition process is complete, the final recognized obstacle detection results can be exported as an XML file by selecting the “File > XML Output” option from the menu.
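The paper does not specify the XML schema used by the “File > XML Output” option, so the element and attribute names in the following export sketch are hypothetical placeholders:

```python
# Hypothetical schema: <obstacles><obstacle id=...><min .../><max .../></obstacle>...
# The element and attribute names are placeholders, not the software's actual format.
import xml.etree.ElementTree as ET

def obstacles_to_xml(boxes):
    """Serialize detected obstacle boxes ((xmin, ymin, zmin), (xmax, ymax, zmax), mm)."""
    root = ET.Element("obstacles")
    for i, (lo, hi) in enumerate(boxes):
        ob = ET.SubElement(root, "obstacle", id=str(i))
        ET.SubElement(ob, "min", x=str(lo[0]), y=str(lo[1]), z=str(lo[2]))
        ET.SubElement(ob, "max", x=str(hi[0]), y=str(hi[1]), z=str(hi[2]))
    return ET.tostring(root, encoding="unicode")

xml_out = obstacles_to_xml([((1000, 500, 0), (1500, 900, 800))])
```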
5.2 Testing the Obstacle Detection System
The performance of the system was validated using the test bench. Fig. 10(a) shows the acquisition of point cloud data from the test bench. The acquired point cloud data is depicted in Fig. 10(b), in which the blasting robot, the hose connected to the robot, the hose feeder, and the table are detected. Fig. 10(c) shows that all these objects are recognized as obstacles. From the robot’s perspective, every item inside the double hull block is treated as an obstacle; the system does not distinguish between object types.
To evaluate the accuracy of the detection system, two tests were conducted to measure the position of each object and compare it with the detection results. The outcomes of these tests are as follows.
(1) List of obstacles used
- PIPE1: Diameter: 165 mm, Length: 2970 mm
- PIPE2: Diameter: 115 mm, Length: 1000 mm
- Robot1, Robot2, and hose feeder: Because the attached cables made accurate size measurement difficult, their dimensions (the range occupied within the block) were measured directly on-site.
(2) Results of measurements from the obstacle detection sensor
- Case 1: Measured after installing Robot1, Robot2, Hose feeder, and PIPE2
- Case 2: Measured after installing PIPE1 and PIPE2
Figs. 11(a) and 11(b) illustrate the actual and measured positions of the objects placed within the test bench for each case, with the measured positions obtained from the sensor indicated in parentheses. Across the two tests, even the largest error―the red measurement value in Fig. 11(a)―is within 100 mm. This level of error is acceptable for defining the robot’s working range.
6. Conclusion and Future Research
This study developed a point cloud data acquisition and processing system using a LiDAR sensor to sense the working environment inside a double hull block for production automation. An algorithm was proposed for effective obstacle recognition within the workspace, and its performance was validated through experimental testing. The sensor system demonstrated accurate obstacle detection in the complex environment of the double hull, establishing it as a foundational technology for automating production processes in shipyards. Future research will focus on deep learning-based object recognition to improve detection performance in more complex structures, and on enhancing the system’s stability and scalability for real-time data processing and integrated sensor operation across diverse environments. If these studies are successfully carried out, they will contribute significantly to the establishment of smart shipyards.
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Funding
This work was supported by a Research Grant of Pukyong National University (2023).
Fig. 1. Double hull structure: (a) Midship section of VLCC; (b) CAD model of double hull structure
Fig. 2. Hardware of the sensor system: (a) Design of the point cloud data acquisition system; (b) Picture of hardware
Fig. 3. Verification of LiDAR sensor: (a) Panel test setup; (b) Panel test results
Fig. 4. Concept of sensor system operation: (a) Plan view; (b) Side view; (c) Point cloud data
Fig. 5. Prototype of production automation robot
Fig. 6. Test bench: (a) Placement of objects; (b) Picture taken by camera A; (c) Picture taken by camera B
Fig. 7. Point cloud data: (a) Corresponding point cloud for camera A; (b) Corresponding point cloud for camera B
Fig. 8. Obstacle recognition algorithm: (a) Point cloud data; (b) Plane recognition; (c) Voxelization; (d) Obstacle detection
Fig. 9. Obstacle detection software: (a) Loading point clouds; (b) Entering stiffener information; (c) Detecting obstacles
Fig. 10. Obstacle detection test: (a) Picture of the test bench; (b) Point cloud; (c) Obstacle detection results
Fig. 11. Obstacle detection test results: (a) Test results of case 1; (b) Test results of case 2
Table 1. Specification of the LiDAR sensor

Specification | Value
Model | UHG-08LX
Detection distance | 100–8,000 mm
Accuracy | 100–1,000 mm: ±30 mm; 1,000–8,000 mm: ±3% of measurement
Scan angle | 270°
Angular resolution | 0.36°
Scan time | 67 msec/scan
Resolution | 1 mm
Interface | USB 2.0
Weight | 500 g
Dimension | 88 × 83 × 83 mm
References
Besl, P. J., & McKay, N. D. (1992). A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239-256. https://doi.org/10.1109/34.121791
Dorai, C., Wang, G., Jain, A. K., & Mercer, C. (1998). Registration and integration of multiple object views for 3D model construction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), 83-89. https://doi.org/10.1109/34.655652
Heo, S. H., Kang, M. J., Choi, J. W., & Park, J. H. (2024). Design of a mapping framework on image correction and point cloud data for spatial reconstruction of digital twin with an autonomous surface vehicle. Journal of the Society of Naval Architects of Korea, 61(3), 143-151. https://doi.org/10.3744/SNAK.2024.61.3.143
Kim, J., Lee, H., & Cho, E. (2023, December). The present and future of smart yards and offshore plants. The Korean Society of Ocean Engineers News Letter, 10(3), 18-21.
Lee, D. K., Ji, S. H., & Park, B. Y. (2021). Object detection and post-processing of LNGC CCS scaffolding system using 3D point cloud based on deep learning. Journal of the Society of Naval Architects of Korea, 58(5), 303-313. https://doi.org/10.3744/SNAK.2021.58.5.303
Lee, D. K., Ji, S. H., & Park, B. Y. (2022a). PointNet and RandLA-Net algorithms for object detection using 3D point clouds. Journal of the Society of Naval Architects of Korea, 59(5), 330-337. https://doi.org/10.3744/SNAK.2022.59.5.330
Lee, J. M., Lee, K. H., Kim, D. S., & Li, R. (2014). Point cloud data based augmented reality for rapid discrepancy check. Proceedings of the Society of CAD/CAM Engineers Conference, 225-228.
Lee, J. W., Ku, N. K., & Lee, J. Y. (2022b). Development of point cloud data acquisition system prototype for sensing working environment in double hull block. Korean Journal of Computational Design and Engineering, 27(4), 375-381. https://doi.org/10.7315/CDE.2022.375
Lee, J. W., Patil, A. K., Holi, P., & Chai, Y. H. (2016). A study on automatic modeling of pipelines connection using point cloud. Korean Journal of Computational Design and Engineering, 21(3), 341-352. https://doi.org/10.7315/CDE.2016.341
Li, R., Lee, K. H., Lee, J. M., Nam, B. W., & Kim, D. S. (2016). A study on matching method of hull blocks based on point clouds for error prediction. Journal of the Computational Structural Engineering Institute of Korea, 29(2), 123-130. https://doi.org/10.7734/COSEIK.2016.29.2.123