### 1. Introduction

### 2. Research Process

### 2.1 Machine Learning

### 2.2 Perceptron

A perceptron receives input values (*x*₀, *x*₁, *x*₂, …, *x*ₙ); each weight (*w*₀, *w*₁, *w*₂, …, *w*ₙ) is multiplied by its corresponding input value.
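The weighted-sum operation described above can be sketched as follows. This is an illustrative NumPy example, not the study's implementation; the function name, bias term, and step activation are assumptions:

```python
import numpy as np

def perceptron(x, w, b=0.0):
    """Multiply each input x_i by its weight w_i, sum the products,
    and pass the result through a step activation (a common
    perceptron formulation; assumed here for illustration)."""
    s = np.dot(w, x) + b          # weighted sum: w0*x0 + w1*x1 + ... + wn*xn
    return 1 if s > 0 else 0      # step activation

# Example: weighted sum = 0.8*1.0 + 0.1*0.5 + 0.3*(-0.2) = 0.79 > 0 -> 1
x = np.array([1.0, 0.5, -0.2])
w = np.array([0.8, 0.1, 0.3])
print(perceptron(x, w))  # 1
```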

### 2.3 Multi-Layer Perceptron (MLP)

An MLP likewise operates on input values (*x*₀, *x*₁, *x*₂, …, *x*ₙ) and weights (*w*₀, *w*₁, *w*₂, …, *w*ₙ), arranged in one or more hidden layers between the input and output.
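A forward pass through such a network can be sketched as below. This is a minimal NumPy illustration, assuming sigmoid hidden-layer activations and a linear output for a regression target; layer sizes and initialization are placeholders, not the study's TensorFlow model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    """Forward pass: each hidden layer applies its weight matrix and
    bias followed by a sigmoid; the final layer is left linear,
    suitable for a regression output such as an RAO value."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(W @ a + b)               # hidden layer
    return weights[-1] @ a + biases[-1]      # linear output layer

rng = np.random.default_rng(0)
sizes = [3, 4, 4, 1]   # 3 inputs, two hidden layers, 1 output (illustrative)
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(m) for m in sizes[1:]]
y = mlp_forward(np.array([0.2, -0.1, 0.5]), Ws, bs)
print(y.shape)  # (1,)
```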

### 2.4 Data Collection

### 2.5 Structure of Artificial Neural Network (ANN)

### 3. Learning Result and Discussion

### 3.1 Accuracy Assessment Indices

#### 3.1.1 Random number change of the learning model

#### 3.1.2 Root mean square error (RMSE)

*y*ᵢ denotes the RAO value obtained via simulation using an in-house code, and *ŷ*ᵢ denotes the RAO value as predicted by the learning model.
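Assuming the standard definition of RMSE, the index can be computed from these two quantities as sketched below (an illustrative example, not the study's evaluation code):

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean square error between simulated RAO values y_i and
    model-predicted RAO values y_hat_i (standard definition,
    assumed here): sqrt of the mean squared difference."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Errors are (0, 0, 2), so RMSE = sqrt(4/3) ≈ 1.1547
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```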

#### 3.1.3 Standard deviation (SD)

#### 3.1.4 Correlation coefficient

The correlation coefficient (*ρ*) was used as an index to measure the direction and intensity of the linear relationship. It can be expressed as shown in Eq. (3):
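Assuming *ρ* takes the standard Pearson form (covariance of the two series divided by the product of their standard deviations), it can be computed as in this illustrative sketch:

```python
import numpy as np

def correlation(y, y_hat):
    """Pearson correlation coefficient (rho): covariance divided by
    the product of the standard deviations, taken from the
    off-diagonal entry of np.corrcoef."""
    return float(np.corrcoef(y, y_hat)[0, 1])

# A perfectly linear relationship yields rho ≈ 1.0
print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))
```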

### 3.2 Case Information

### 3.3 Learning Results by Data Number

The *x*-axis represents the length in Fig. 4(a), breadth in Fig. 4(b), and draft in Fig. 4(c), and the *y*-axis represents the number of data points. The purpose of using similar data distributions for each data number was to balance the data configurations before learning, as biased data can distort the accuracy of a learning model.
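The seed-dependent cases examined below (seed numbers 14 and 18) can be illustrated with a minimal sketch: a fixed seed makes the shuffle reproducible, and changing it changes which samples land in the training and assessment sets. The split function and the 80/20 ratio here are assumptions for illustration, not the study's actual preprocessing:

```python
import numpy as np

def seeded_split(data, seed, train_frac=0.8):
    """Shuffle the dataset with a fixed seed, then split it into
    training and assessment portions (ratio assumed for
    illustration)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    cut = int(len(data) * train_frac)
    return data[idx[:cut]], data[idx[cut:]]

data = np.arange(500)                    # stand-in for 500 ship samples
train14, test14 = seeded_split(data, seed=14)
train18, test18 = seeded_split(data, seed=18)
print(len(train14), len(test14))         # 400 100
```

Different seeds produce different partitions of the same data, which is why the learning results in Sections 3.3.1 and 3.3.2 can differ.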

#### 3.3.1 Seed number: 14

#### 3.3.2 Seed number: 18

### 3.4 Learning Results with Different Numbers of Hidden Layers

### 3.5 Learning Results with Different Numbers of Neurons in the Hidden Layer

### 3.6 Analysis of the Optimal Model Learning Result and Discussion

The correlations of the features (*L*, *D*, *V*, *I*₄₄, *C*₄₄, *GM*ₜ) with the ship's breadth in the assessment and test data of seed number 14 were determined and ranked. For example, the breadth (*B*) and volume (*V*) of the training data showed a high positive correlation of 0.839, but in the assessment data it was 0.737, i.e., a lower correlation than in the training data. Moreover, the correlations for features such as the restoring coefficient (which is directly related to the location of the resonance point) were different. This indicates that the low accuracy was caused by the differences between the trends of the training and assessment data.
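The correlation ranking described above can be sketched as follows. The feature names mirror the text, but the values are synthetic placeholders, not the paper's ship data, and the ranking function is an illustrative assumption:

```python
import numpy as np

def rank_correlations_with(features, target_name):
    """Rank each feature by the absolute Pearson correlation of its
    values with the target feature (here, the breadth B)."""
    corrs = {n: float(np.corrcoef(v, features[target_name])[0, 1])
             for n, v in features.items() if n != target_name}
    return sorted(corrs.items(), key=lambda kv: abs(kv[1]), reverse=True)

rng = np.random.default_rng(14)
B = rng.uniform(10, 40, 200)
features = {
    "B": B,
    "V": 0.9 * B + rng.normal(0, 3, 200),   # volume loosely tied to breadth
    "L": rng.uniform(50, 150, 200),         # length, independent here
}
ranking = rank_correlations_with(features, "B")
print(ranking[0][0])  # "V": strongest correlation with B in this synthetic data
```

Comparing such rankings between the training and assessment subsets reveals the distribution mismatch discussed above.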

### 4. Conclusions

The features (*L*, *B*, *D*, *V*, *k*₄₄, *I*₄₄, *C*₄₄, *GM*ₜ) were generated using the specifications of 500 barge-type ships registered with classification societies. In addition, the values for the roll RAO were obtained by simulating the 500 ships using an in-house code based on a 3D singularity distribution method, and the features and RAOs of the data sets were configured. The data were then composed with the RAOs in the range of 0.1–2.0 rad/s, according to the major specifications of the barge-type ships. For the learning model, an ANN was created using Python's TensorFlow, and a DNN technique with two or more hidden layers was used. The accuracy of the learning results was determined by changing the number of data points, the number of hidden layers, and the number of nodes in the hidden layer. The RMSE, SD, correlation coefficient, and scatter plot were used as accuracy indices. When the RMSE and SD were considered together, the optimal results were obtained in Case 4 [DN500_L2_NN (100,100)]. Finally, the shortcomings of the learning model and possible improvements were examined through an analysis of the accuracy of Case 4.