Fundamental Research for Video-Integrated Collision Prediction and Fall Detection System to Support Navigation Safety of Vessels

Article information

J. Ocean Eng. Technol. 2021;35(1):91-97
Publication date (electronic) : 2021 February 25
doi : https://doi.org/10.26748/KSOE.2020.069
1Senior Researcher, Ocean ICT & Advanced Materials Technology Research Division, RIMS, Busan, Korea
2Researcher, Ocean ICT & Advanced Materials Technology Research Division, RIMS, Busan, Korea
3Technical Director, Research Institute of Ocean and Ship ICT Convergence, MEIPA, Busan, Korea
Corresponding author Hun-Gyu Hwang: +82-51-974-5572, hghwang@rims.re.kr
It is noted that this paper is a revised edition based on the proceedings of KIICE 2020 in Gyeongju.
Received 2020 November 27; Revised 2021 January 21; Accepted 2021 January 21.

Abstract

Marine accidents caused by ships have brought about economic and social losses as well as human casualties. Most of these accidents involve small and medium-sized ships, owing to their poor conditions and insufficient equipment compared with larger vessels, and measures are urgently needed to improve these conditions. This paper discusses a video-integrated collision prediction and fall detection system to support the safe navigation of small and medium-sized ships. The system predicts collisions between ships and detects falls by crew members using CCTV, displays the analyzed integrated information using automatic identification system (AIS) messages, and provides alerts for the risks identified. The design consists of an object recognition algorithm, an interface module, an integrated display module, a collision prediction and fall detection module, and an alarm management module. As fundamental research, we implemented a deep learning algorithm to recognize ships and crew from images and an interface module to manage messages from the AIS. To verify the implemented algorithm, we conducted tests using 120 images. Object recognition performance was calculated as mean average precision (mAP) by comparing the pre-defined objects with the objects recognized by the algorithm. As a result, the object recognition performance for ships and crew was approximately 50.44 mAP and 46.76 mAP, respectively. The interface module showed that messages from the installed AIS were accurately converted according to the international standard. Therefore, we implemented the object recognition algorithm and interface module of the designed collision prediction and fall detection system and validated their usability through testing.

1. Introduction

South Korea has a large ocean area, complex coastlines, and many small and large islands. Because of these geographic characteristics, ships are widely used as a method of commercial transportation (Kim et al., 2020). Maritime accidents that occur during the navigation of such vessels result in a high number of human casualties and substantial property damage. If oil is spilled from a vessel, the environment also becomes polluted, leading to greater societal loss. The current management system and follow-up measures for preventing such accidents are limited, and their actual effect has been insignificant. As a result, the frequency of maritime accidents varies greatly each year. In addition, the activity boundary of fishing boats and the demand for tourism in island regions have risen, resulting in an increased risk of accidents. Moreover, the country is highly dependent on marine transport; because its geographical environment makes land transport impossible during import and export, most goods are transported by maritime vessels. As a result, maritime traffic and congestion are rising rapidly. Meanwhile, small and medium-sized ships, such as fishing boats and coastal passenger vessels, make up a large proportion of the registered vessels in South Korea, and many maritime accidents are caused by these ships (Jung, 2013; Park et al., 2018). Most ship accidents are caused by navigational mistakes, such as lack of vigilance, violation of navigation laws, and failure to comply with work safety regulations. A large proportion of human casualties are attributed to overboard accidents caused by loss of footing or carelessness.
To prevent maritime accidents caused by human factors, such as ships colliding or running aground and overboard falls, it is necessary to adopt a system that promptly recognizes and predicts risk factors by linking the navigation communication equipment with CCTV (Closed-circuit television) camera images and alerting navigators with warnings based on the situation and the safety information related to the sea route.

Therefore, to enhance the safety and navigational reliability of small and medium-sized vessels, this paper presents a fundamental study of a video-integrated collision prediction and fall detection system that combines data from an automatic identification system (AIS), essential navigation communication equipment for ships, with images from a CCTV camera. Section 2 analyzes the background and needs, including related studies conducted in South Korea and overseas. Section 3 describes the composition of the system being developed and the design of each component. Section 4 discusses the implementation of the algorithm and interface modules, and Section 5 covers testing and performance verification. Section 6 concludes the paper and suggests future studies.

2. Background and Literature Review

2.1 Background and Needs

According to statistics, a total of 2,971 maritime accidents occurred in 2019. Although both major and minor causes played a role, human mistakes accounted for approximately 80% of these accidents (MOF, 2020). These human mistakes can be attributed to a variety of factors, such as the behavior, habits, and mindset of operators; the internal and external environment; customary practices and organizational culture; and relationships with co-workers (Kim et al., 2011). It is difficult to accurately identify and eliminate complicated human errors, and conventional methods of re-education and training have limitations. Furthermore, more than 84% of the accidents were caused by small and medium-sized ships. This is because, in comparison with large vessels, these ships are relatively vulnerable to the marine environment, their navigators are older, and they have inadequate navigation communication equipment or insufficient information. For overboard accidents, a timely response in rescuing people who have fallen overboard is important. However, man overboard (MOB) equipment operates only after a person falls overboard, so it cannot prevent or predict such accidents.

2.2 Literature Review

Domestic studies that utilize images to aid maritime safety include the development of an algorithm that uses computer vision to measure maritime traffic and to confirm collision with marine structures; the implementation of an augmented reality-based image analysis module that visualizes the navigation information to enable the navigator to simultaneously check both the view outside the ship's bridge and the navigation information; the development of a camera-AIS-linked active monitoring and approach alarm system to protect marine buoys; and a study of a search and rescue method using cameras and drones for overboard accidents (Joo et al., 2011; Lee et al., 2013; Kim and Jeong, 2017; Hwang et al., 2018). However, the objectives of these studies were to prevent other ships from approaching the vessel and to assist in incidents of maritime distress, so their systems performed different functions than the system developed in this study. Moreover, these systems were not integrated with the equipment installed on the vessel and had to be operated separately, which increases the number of pieces of equipment used in navigation and places more burden on the navigator.

In other countries, research on automated maritime surveillance using CCTV cameras is actively ongoing to cope with increased maritime traffic. Studies have been conducted to develop systems that track vessels and provide visualized information using the identification information obtained by combining the data collected by the vessel traffic service (VTS) system with CCTV images (Bloisi et al., 2011; Bloisi et al., 2016; Xiao et al., 2018). Because the systems developed in these studies operate on land, they mainly support control activities and cannot recognize or respond to dangerous situations on the vessel.

Therefore, the system developed in this study recognizes vessels in the CCTV camera images using artificial intelligence and displays the vessel’s information in an integrated manner by linking with the navigation communication equipment. In addition, because it recognizes people, the system can detect an overboard fall, which could occur while the crew members are working. It also provides the navigator with warnings in each phase to raise awareness in response to dangerous situations such as collision and drowning.

3. Design of the System

3.1 Automatic Identification System (AIS)

When a vessel is navigating, it uses various types of navigation communication equipment to identify maritime traffic conditions and consider them in operation and navigation plans to prevent maritime accidents. A radar emits electromagnetic waves out to sea and then receives, analyzes, and displays the signals reflected from the surfaces of objects. Because radars are expensive, they are installed and operated predominantly on large vessels. Moreover, their reception sensitivity is degraded under inclement weather conditions or when the waves are high. Radars can only check for the existence of other ships; they cannot identify other information, such as the name and type of a ship and its cargo, without additional equipment. Radars also have the disadvantage that it is difficult to detect a target behind a bend or an obstacle. To remedy such issues, the International Maritime Organization (IMO) recommends installing an AIS on vessels.

The AIS is a piece of navigation communication equipment used to exchange a vessel's information with other vessels, as well as between vessel and land. Depending on the status or demand of the vessel, it transmits wireless data over the VHF (very high frequency) band. The AIS provides the maritime mobile service identity (MMSI), static information (the IMO identification number, name of the vessel, vessel specifications, etc.), and dynamic information (navigation status, speed, location, and course of the vessel). Using the AIS, the VTS center manages the passage of vessels and supports them when they navigate coastal waters. Fig. 1 shows the AIS receiver installed for this study.

Fig. 1

AIS Receiver used by the proposed system

3.2 Composition of the System

As shown in Fig. 2, the video-integrated collision prediction and alarm system, which combines the AIS with CCTV camera images, consists of the following six components.

  1. (1) The ship recognition algorithm obtains images of the visual scene in front of the ship from a camera facing the ship’s bow. Using artificial intelligence, the ship recognition algorithm recognizes other vessels in these images, such as large, medium, and small ships and fishing boats, that could be a risk factor during navigation.

  2. (2) The crew recognition algorithm obtains images of the visual scene behind the ship using a camera facing the ship’s stern. Using artificial intelligence, the crew recognition algorithm recognizes the crew in these images to identify human accidents due to falls that could occur because of the carelessness of people onboard the ship. The term crew normally means people who operate or navigate the ship or perform desk jobs. However, everyone onboard the ship, including the crew members, passengers, and people living on the islands, is exposed to the risk of overboard falls. Therefore, in this study, the term crew is used to refer to everyone onboard the ship.

  3. (3) The interface module analyzes the AIS data received in the IEC 61162-1/2 (NMEA 0183) sentence format. Based on the analysis, the interface module converts and processes the location of the host ship and the static and dynamic information about other vessels.

  4. (4) If the ship recognition algorithm recognizes another vessel, and that vessel’s location matches the location processed by the interface module, the integrated display module displays the name, speed, and location of that vessel in text format on the camera image layer. For any vessel that was recognized as a marine obstacle but whose accurate location information could not be acquired because the AIS was not installed, only the distance between that vessel and the host ship is displayed.

  5. (5) The collision prediction and fall detection module uses the outputs of the ship recognition algorithm and the interface module to calculate the distance to the recognized marine obstacle, and it also calculates the location of the predicted collision. In addition, this module monitors dangerous situations, such as when a recognized crew member approaches a set boundary or falls over the guardrail.

  6. (6) The alarm management module receives the data analyzed by the collision prediction and fall detection module and triggers the navigator alarm according to the set risk stages.

Fig. 2

Conceptual diagram of the designed system
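The ship-to-obstacle distance used by the collision prediction module (component (5) above) can be illustrated with a great-circle calculation on AIS positions. The following is a minimal sketch under the haversine formula; the function name, coordinates, and Earth radius are illustrative assumptions, not details of the implemented module.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in kilometers between two positions given in
    decimal degrees, using the haversine formula (a sketch; the actual
    module may use a different distance model)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Example: two positions at latitude 35 N, one degree of longitude apart
d = haversine_km(35.0, 128.0, 35.0, 129.0)  # roughly 91 km
```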

4. Implementation of the Algorithm and Interface Module

Prior to developing an overall video-integrated collision prediction and fall detection system to support the navigation safety of small and medium-sized ships, we implemented the ship and crew recognition algorithm and the AIS data interface module.

4.1 Implementation of the Ship and Crew Recognition Algorithm

Past studies on object recognition identified an object by evaluating its features: the image was divided into a grid, feature information was constructed for each area, and machine learning was used for object recognition. However, convolutional neural network (CNN)-based deep learning has since emerged and shows performance superior to these previous methods, so the object recognition rate has improved (Lee et al., 2018). Therefore, an algorithm was implemented to recognize ships and crew in the obtained images based on a deep learning model appropriate for real-time image recognition. You only look once (YOLO) v2, which is widely used in related fields, was used as the deep learning model. This method sections the input image into a grid and finds the object to be recognized in each cell. The network structure diagram for this method is shown in Fig. 3.

Fig. 3

YOLOv2 network structure (Redmon et al., 2016)
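The grid partitioning described above can be sketched as follows. A 416×416 input with a 13×13 output grid is the standard YOLOv2 configuration; the function itself is an illustrative sketch of the cell-assignment rule, not the implemented C++ library.

```python
def responsible_cell(cx, cy, img_size=416, grid=13):
    """Return the (row, col) grid cell responsible for detecting an object
    whose bounding-box center is at pixel (cx, cy). In YOLO, only the cell
    containing the object's center predicts that object."""
    stride = img_size / grid  # 32 pixels per cell for 416 / 13
    return int(cy // stride), int(cx // stride)

# An object centered at (208, 100) is assigned to row 3, column 6
```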

The 2014 COCO dataset (COCO, 2014) includes a variety of images, such as vessels, people, cars, and animals, along with the coordinates, width, and height of the objects. This dataset was selected as the learning data, and approximately 82,000 images were used in training. Because the input images were in color, arbitrary lighting was added to adjust the brightness; the images were preprocessed in this way to improve the performance of the deep learning model during the learning process. The algorithm was developed as a C++ library to make it easy to extend to application software and apply to an operating system, and NVIDIA CUDA was used to process the images at high speed.
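The brightness adjustment used for preprocessing can be sketched with NumPy as follows. The paper does not specify the offset range, so `max_delta` and the function name are illustrative assumptions.

```python
import numpy as np

def random_brightness(image, max_delta=40, rng=None):
    """Add a random brightness offset to an 8-bit color image, clipping to
    the valid [0, 255] range. A sketch of the augmentation idea only; the
    actual preprocessing parameters are not given in the paper."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = int(rng.integers(-max_delta, max_delta + 1))
    shifted = image.astype(np.int16) + delta  # widen dtype before shifting
    return np.clip(shifted, 0, 255).astype(np.uint8)
```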

4.2 Implementation of the Interface Module

An AIS interface software module is needed to map and display the information about the ships shown in the CCTV camera image. Out of a total of 27 AIS messages (numbers 1 through 27), message number 5 contains the static and voyage-related data of Class A equipment, and message numbers 1 through 3 contain the dynamic and location-reporting data of Class A equipment. The AIS interface software module was implemented to convert and process these messages. Class B equipment is used by small vessels in South Korea; however, fishing boats, which make up a large proportion of these vessels, instead use the vessel pass (V-Pass) equipment, which automatically reports the location of the vessel to the VTS center and transmits a distress signal if the ship tilts beyond a certain angle. For security reasons, the permission and agreement of the Korea Coast Guard are required before V-Pass data can be collected, analyzed, and used; therefore, V-Pass data are excluded from this study.

Controls for the interface module, such as the serial communication port and speed settings, saving the received data as text, clearing the list, and exiting the module, are shown in the upper left corner of the screen. The list of received AIS raw data is shown in the bottom left corner. The interface module was implemented to show the compressed messages on the right side after they are converted and processed according to the IEC 61162-1/2 (NMEA 0183) international standard. Fig. 4 shows the implemented interface module.

Fig. 4

GUI of the interface module
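The payload decoding behind such an interface module can be sketched as follows. This follows the standard six-bit armoring of !AIVDM/!AIVDO payloads (ITU-R M.1371, carried in NMEA 0183 sentences), in which the first six bits give the message number and, for position reports, bits 8 through 37 give the MMSI; the helper names below are illustrative assumptions, not the implemented module.

```python
def sixbit(ch):
    """Decode one AIVDM payload character to its 6-bit value
    (standard payload armoring: subtract 48, then 8 more if above 40)."""
    v = ord(ch) - 48
    return v - 8 if v > 40 else v

def payload_bits(payload):
    """Concatenate the 6-bit values of a payload into one bit string."""
    return "".join(format(sixbit(c), "06b") for c in payload)

def message_type(payload):
    """Bits 0-5 of any AIS message hold the message number (1-27)."""
    return int(payload_bits(payload)[:6], 2)

def mmsi(payload):
    """Bits 8-37 of position-report messages (types 1-3) hold the MMSI."""
    return int(payload_bits(payload)[8:38], 2)
```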

5. Component Testing and Performance Verification

A test program was developed to evaluate the performance of the ship and crew recognition algorithm. Using the pre-defined object coordinates, the test program places a blue border around each ground-truth object in the input image, places a red border around each object recognized by the algorithm, and prints and saves the screen. The program was also implemented to calculate and display the mean average precision (mAP), which is used as a performance indicator in object detection applications, to represent the object recognition performance quantitatively. First, 120 images were collected from an Internet portal to test the recognition of ships. Fig. 5 shows some of the results obtained by inputting the collected images into the test program.

Fig. 5

Test result image sets for ship recognition
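Underlying the mAP calculation, each algorithm detection is matched to a pre-defined box by overlap, commonly measured as intersection over union (IoU). The sketch below assumes (x1, y1, x2, y2) box corners and the conventional 0.5 true-positive threshold; these conventions are assumptions, as the paper does not state them.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2). A detection is typically counted as a true positive
    when its IoU with a ground-truth box exceeds a threshold such as 0.5."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```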

In the result image shown in Fig. 6, there are three vessels in the picture. The blue border areas (pre-defined from the ships' coordinates) and the red border areas (produced by the algorithm) overlap, with little difference between them. The implemented algorithm showed a ship recognition performance of 50.44 mAP, as shown in Fig. 7.

Fig. 6

Test result for the ship recognition

Fig. 7

mAP for ship recognition

The evaluation data for crew recognition were collected and tested using the same method. Fig. 8 shows some of the test results on the 120 evaluation images collected for crew recognition, and Fig. 9 shows one of the resulting images. In Fig. 9, there are three people in the picture. However, the result marked by the algorithm shows one additional border area beyond the three that overlap the pre-defined areas; the analysis revealed that the algorithm misidentified the guardrail of the yacht as a person. To improve the recognition accuracy, pictures of people taken from various angles must be obtained, and further training is required. The crew recognition performance of the implemented algorithm is 46.76 mAP, as shown in Fig. 10.

Fig. 8

Test result image sets for crew recognition

Fig. 9

Test result for the crew recognition

Fig. 10

mAP for the crew recognition

The AIS was installed to test the interface module. The message numbers and compressed information for the data received via the AIS were converted and processed, as shown in Fig. 11. The tenth item in the list of received AIS raw data is message number 1. It indicates that a vessel with MMSI number 352689000 is navigating using an engine, moving at a speed of approximately 8.0 km/h with a heading of 89° at a longitude of 128°49′30.2″E and a latitude of 35°04′20.4″N. The same raw data were input into an AIS message decoder available online (Thomas, n.d.), and a comparison of the two results confirmed that the interface module processed the message accurately.

Fig. 11

Test result for the implemented interface module
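A position check like the one above can be reproduced with a simple decimal-degree to degree/minute/second conversion. The function below is an illustrative sketch for positive coordinates only; the name and one-decimal rounding are assumptions, not part of the implemented module.

```python
def dd_to_dms(dd):
    """Convert positive decimal degrees to (degrees, minutes, seconds),
    with seconds rounded to one decimal place."""
    deg = int(dd)
    minutes_full = (dd - deg) * 60
    minutes = int(minutes_full)
    seconds = round((minutes_full - minutes) * 60, 1)
    return deg, minutes, seconds

# 128.825056 decimal degrees east is about 128 deg 49 min 30.2 s east
```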

6. Conclusion and Future Studies

This paper described a fundamental study on a video-integrated collision prediction and fall detection system to support small and medium-sized vessels in navigating safely. Based on artificial intelligence, this system recognizes ships and crew in the images obtained from a CCTV camera installed on the vessel, predicts collisions, and detects overboard falls. Furthermore, the system converts the static and dynamic vessel data collected via the AIS and displays the images and information in an integrated manner. The system also alerts the navigator with warnings in each phase according to the analysis of dangerous situations. The system components were designed, and the ship and crew recognition algorithm and the interface module were implemented and tested. The results showed a ship recognition performance of 50.44 mAP and a crew recognition performance of 46.76 mAP. The interface module was tested by verifying that the messages received through the installed AIS were converted and processed according to the international standard.

We are currently developing the collision prediction and fall detection module to predict accidents involving the ships and crew recognized in the images. We are also researching an overlay technique specialized for maritime cameras to display information in an integrated manner. Furthermore, we are improving the algorithm’s object recognition accuracy. We plan to complete the development of the designed video-integrated collision prediction and fall detection system in the future. We also plan to verify the performance and function of the system in the maritime environment using a real vessel to ensure that the system is useful.

Notes

This research was financially supported by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the National Innovation Cluster R&D Program (P0006893, A Development of Predictive Maintenance System for Core Facilities in Maneuvering Vessel) and through the Encouragement Program for the Industries of Economic Cooperation Region (Grant Number: P0008664).

References

Bloisi D, Iocchi L, Fiorini M, Graziano G. 2011. Automatic Maritime Surveillance with Visual Target Detection. In : Proceedings of International Defense and Homeland Security Simulation Workshop. Rome, Italy. p. 141–145.
Bloisi D, Previtali F, Pennisi A, Nardi D, Fiorini M. 2016;Enhancing Automatic Maritime Surveillance Systems with Visual Information. IEEE Transactions on Intelligent Transportation Systems 18(4):824–833. https://doi.org/10.1109/TITS.2016.2591321.
Common Objects in Context (COCO). 2014;2014 Train Images. Retrieved August 2020 from http://images.cocodataset.org/zips/train2014.zip.
Hwang HG, Kim BS, Kim HW, Kang YS, Kim DH. 2018;A Development of Active Monitoring and Approach Alarm System for Marine Buoy Protection and Ship Accident Prevention Based on Trail Cameras and AIS. Journal of Korea Institute of Information and Communication Engineering 22(7):1021–1029. https://doi.org/10.6109/jkiice.2018.22.7.1021.
Joo KS, Jeong JS, Kim CS, Jeong JY. 2011;The Vessels Traffic Measurement and Real-time Track Assessment using Computer Vision. Journal of Korean Society of Marine Environment & Safety 17(2):131–136. https://doi.org/10.7837/kosomes.2011.17.2.131.
Jung CH. 2013;A Study on the Requirement to the Fishing Vessel for Reducing the Collision Accidents. Journal of Korean Society of Marine Environment and Safety 20(1):18–25. https://doi.org/10.7837/kosomes.2014.20.1.018.
Kim BS, Hwang HG, Woo YT, Yoo JY, Kim TS. 2020. A Conceptual Design of Video-complex Collision Risk Prevention and Alarm System for Safety Navigation of Small and Medium Ships. In : Proceeding of the Korea Institute of Information and Communication Engineering. Gyeongju, Korea. p. 522–524.
Kim HT, Na S, Ha WH. 2011;A Case Study of Marine Accident Investigation and Analysis with Focus on Human Error. Journal of Ergonomics Society of Korea 30(1):137–150. https://doi.org/10.5143/JESK.2011.30.1.137.
Kim SR, Jeong JS. 2017;A Study on Search and Rescue Method in Manoverboard Using Drone. Journal of Korean Institute of Intelligent Systems 27(3):236–240. https://doi.org/10.5391/JKIIS.2017.27.3.236.
Lee JM, Lee KH, Kim DS. 2013;Image Analysis Module for AR-based Navigation Information Display. Journal of Ocean Engineering and Technology 27(3):22–28. https://doi.org/10.5574/KSOE.2013.27.3.022.
Lee JS, Lee SK, Kim DW, Hong SJ, Yang SI. 2018;Trends on Object Detection Techniques Based on Deep Learning. Journal of Electronics and Telecommunications Trends 33(4):23–32.
Ministry of Oceans and Fisheries (MOF). 2020. Marine Accident Statistics. Retrieved March 2020 from https://www.kmst.go.kr/kmst/statistics/annualReport/selectAnnualReportList.do.
Park TG, Kim SJ, Chu YS, Kim TS, Ryu KJ, Lee YW. 2018;Reduction Plan of Marine Casualty for Small Fishing Vessels. Journal of Korean Society of Fisheries and Ocean Technology 54(2):173–180. https://doi.org/10.3796/KSFOT.2018.54.2.173.
Redmon J, Divvala S, Girshick R, Farhadi A. 2016. You Only Look Once: Unified, Real-Time Object Detection. In : Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA. p. 779–788. https://doi.org/10.1109/CVPR.2016.91.
Thomas BS. (n.d.). Online AIS Message Decoder. Retrieved August 2020 from http://ais.tbsalling.dk/decode#.
Xiao L, Xu M, Hu Z. 2018;Real-time Inland CCTV Ship Tracking. Mathematical Problems in Engineering 2018:1205210. https://doi.org/10.1155/2018/1205210.
