### 1. Introduction

### 2. Camera Calibration

### 2.1 Principle of Calibration

*M*_{i} is the observation equation relating the image coordinates (*x*_{i}, *y*_{i}) and the spatial coordinates (*X*, *Y*, *Z*) for the *i*-th camera. For every camera, the corresponding observation equation is used to convert the spatial coordinates to the image coordinates. Conversely, a point on the image is represented in 3D space by using the inverse transform of Eq. (2):

The inverse transform cannot uniquely determine *Z* in the space. Therefore, a point on the image has a solution corresponding to a straight line in the space, and it is represented as a line in each camera. Finally, the position that minimizes the LOS error of the matching particles of each camera is determined as the 3D position. Various camera observation equations have been used to relate the space to the camera image; in this study, however, we used an observation equation with 10 elements [the camera’s external elements (*d*, *α*, *β*, *κ*, *m*_{x}, *m*_{y}) and internal elements (*c*_{x}, *c*_{y}, *k*_{1}, *k*_{2})] introduced by Doh et al. (2012a), which are intuitively represented by the camera distances and rotation angles. It is expressed as follows:

Here, *c*_{x} and *c*_{y} denote the ratio between the image and the space for the x- and y-axes, respectively, and *d* denotes the shortest distance between the center of the camera and the plane that passes through the zero-point of the space.

*X*_{m}, *Y*_{m}, and *Z*_{m} are the spatial coordinates rotated by *α*, *β*, and *κ* about the X-, Y-, and Z-axes, respectively, in the space.

*m*_{x} and *m*_{y} denote the misalignment between the z-axis in the image space and the Z-axis in the 3D space, and ∆*x* and ∆*y* represent the degree of refraction of the lens; they are expressed as follows:
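Eqs. (2)–(5) themselves are not reproduced in this excerpt. As an illustrative sketch only (not the exact formulation of Doh et al., 2012a), a projection with the same 10 elements — rotations *α*, *β*, *κ*, distance *d*, scales *c*_{x}, *c*_{y}, offsets *m*_{x}, *m*_{y}, and radial distortion *k*_{1}, *k*_{2} — might be structured as follows:

```python
import numpy as np

def project(Xs, params):
    """Map 3D points (n, 3) to image coordinates (n, 2) with a 10-element model.

    params: d, alpha, beta, kappa  (external: camera distance + rotations)
            mx, my                 (image-axis misalignment offsets)
            cx, cy                 (image/space scale ratios)
            k1, k2                 (radial lens-distortion terms)
    Illustrative sketch only: the exact Eqs. (2)-(5) are assumed, not reproduced.
    """
    d, a, b, k, mx, my, cx, cy, k1, k2 = params
    # rotations about the X-, Y-, and Z-axes
    Ra = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Rb = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rk = np.array([[np.cos(k), -np.sin(k), 0], [np.sin(k), np.cos(k), 0], [0, 0, 1]])
    Xm = Xs @ (Rk @ Rb @ Ra).T              # rotated coordinates (Xm, Ym, Zm)
    # perspective projection from distance d, then scale and offset
    x = cx * Xm[:, 0] / (d - Xm[:, 2]) + mx
    y = cy * Xm[:, 1] / (d - Xm[:, 2]) + my
    # radial lens distortion (the role of the dx, dy terms)
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return np.stack([x * factor, y * factor], axis=1)
```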

### 2.2 Camera Self-calibration

The calibration uses a position point *P*(*X*, *Y*, *Z*) in the space and a position point *p*(*x*, *y*) on the image to determine the element values of the observation equation that minimize the error of Eqs. (3)–(5). It uses special marks such as circles or crosses on the calibration plate to provide the X and Y information; by providing the Z information while moving the calibration plate vertically at certain intervals, the 3D position information *P*(*X*, *Y*, *Z*) is obtained. Here, the cameras are used to capture images of the calibration plate, and image processing is used to obtain the positions of the marked points, *p*(*x*, *y*). Using the position information obtained in this way, the element values of the observation equation are determined so that the RMS (root-mean-square) error of the position points is minimized; based on this, the camera is calibrated. The obtained camera calibration values contain many errors arising from errors in the mark points on the calibration plate, errors in the Z-axis movement of the calibration plate, errors in the image acquisition device, algorithm errors in finding the mark centers and the optimal solution, and others. These errors have adverse effects on the reconstruction of voxel images; conversely, the reliability of the 3D measurement result can be considerably improved by minimizing them.
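This fitting step can be sketched with a nonlinear least-squares solver: given matched pairs *P*(*X*, *Y*, *Z*) ↔ *p*(*x*, *y*), find the element values that minimize the RMS reprojection error. The observation model below is a deliberately simplified stand-in (five elements instead of ten), so only the structure, not the numbers, is meaningful:

```python
import numpy as np
from scipy.optimize import least_squares

def observe(P, params):
    """Simplified stand-in observation model: distance d, scales cx/cy,
    offsets mx/my (NOT the full 10-element equation of the study)."""
    d, cx, cy, mx, my = params
    x = cx * P[:, 0] / (d - P[:, 2]) + mx
    y = cy * P[:, 1] / (d - P[:, 2]) + my
    return np.stack([x, y], axis=1)

def calibrate(P_world, p_image, guess):
    """Determine element values minimizing the RMS reprojection error."""
    resid = lambda q: (observe(P_world, q) - p_image).ravel()
    fit = least_squares(resid, guess)
    rms = np.sqrt(np.mean(resid(fit.x) ** 2))
    return fit.x, rms

# synthetic calibration-plate marks at several Z planes
rng = np.random.default_rng(0)
P = rng.uniform(-20, 20, (60, 3))
true = np.array([500.0, 30.0, 30.0, 0.5, -0.3])
p = observe(P, true)
params, rms = calibrate(P, p, guess=np.array([400.0, 25.0, 25.0, 0.0, 0.0]))
```

In practice the image points carry noise from the error sources listed above, so `rms` stays finite and is the quantity the self-calibration iterations try to reduce.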

The *i*-th particle, *p*_{1,i}, in the image of camera 1 is utilized to obtain the LOS. (c) The obtained LOS is projected onto camera 2, and all particles, *p*_{2,j}, within a certain distance (1.5 voxels in this study) from the LOS are obtained. (d) The particles *p*_{1,i} and *p*_{2,j} selected in the two cameras are used to find the 3D position, *P*_{1,2}. In other words, *P*_{1,2} is the position in the 3D space where the two straight lines meet, i.e., the LOS of *p*_{1,i} and the LOS of *p*_{2,j}. (e) When the LOS for a single particle is projected onto the other camera, it appears as a straight line across the other camera’s image. Therefore, multiple particles exist on this line, and the matching results include not only actual particles but also many virtual particles. To reduce the number of such virtual particles, the obtained 3D particle, *P*_{1,2}, is projected onto the images of the remaining cameras 3 and 4. (f) If a particle exists within a certain distance (1.5 voxels) of the position projected onto each camera (*p*_{3,k1} and *p*_{4,k2}), it is determined to be a real particle; otherwise, it is determined to be a virtual particle. (g) For only the particles determined to be real, a new 3D particle position is calculated by the least-squares method from the selected 2D particle positions (*p*_{1,i}, *p*_{2,j}, *p*_{3,k1}, and *p*_{4,k2}) corresponding to each camera. (h) This process is used to find all possible 3D particles for every particle of camera 1, and the particles are classified and collected into the coincident regions that are set up based on the voxel position. (i) The 3D particles collected for each region are used to compose a disparity map.
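Steps (d) and (g) both amount to finding the 3D point closest to several lines of sight. A minimal sketch (the construction of each LOS from the observation equation is assumed given):

```python
import numpy as np

def triangulate(origins, dirs):
    """Least-squares 3D position minimizing the distance to each LOS.

    origins, dirs: (n, 3) arrays; each LOS is o_k + t * u_k.
    Solves sum_k (I - u u^T)(X - o_k) = 0, the normal equations of the
    squared point-to-line distances (step (g), sketched).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, u in zip(origins, dirs):
        u = u / np.linalg.norm(u)
        M = np.eye(3) - np.outer(u, u)   # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

def point_line_dist(p, o, u):
    """Distance from point p to the LOS o + t*u, e.g. to apply the
    1.5-voxel acceptance test of steps (c) and (f)."""
    u = u / np.linalg.norm(u)
    v = p - o
    return np.linalg.norm(v - (v @ u) * u)
```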

### 2.3 Evaluation of Camera Self-calibration using Virtual Image

Here, *d* is the vector size, and *u*, *v*, and *w* are the vector components.

*l* represents the thickness of the ring vortex, and *r*_{1} and *r*_{2} denote the distances from the center of the ring vortex. Fig. 2 shows the virtual ring vortex flow field applied in this study; its thickness and size are 16 mm and 40 mm, respectively.
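The velocity-field equations themselves are not reproduced in this excerpt. The sketch below generates a qualitatively similar synthetic vortex ring with a Gaussian core; the ring radius of 20 mm is inferred from the stated 40 mm size, and the core profile is an assumption, not the study's exact formula:

```python
import numpy as np

def ring_vortex(X, Y, Z, R=20.0, l=16.0, u0=1.0):
    """Illustrative vortex-ring velocity field (assumed Gaussian core).

    R: ring radius (mm, assumed from the 40 mm size), l: core thickness (mm).
    Returns velocity components (u, v, w) at the given points; the vector
    size d is then sqrt(u**2 + v**2 + w**2).
    """
    rho = np.sqrt(X**2 + Y**2)             # distance from the ring axis
    r1 = np.sqrt((rho - R)**2 + Z**2)      # distance from the core center
    theta = np.arctan2(Z, rho - R)
    # tangential speed around the core, decaying with the core thickness
    ut = u0 * r1 * np.exp(-((2.0 * r1 / l) ** 2))
    u_rho = -ut * np.sin(theta)            # radial (in-plane) component
    w = ut * np.cos(theta)                 # axial component
    ang = np.arctan2(Y, X)                 # rotate back to Cartesian u, v
    return u_rho * np.cos(ang), u_rho * np.sin(ang), w
```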

Here, *i* denotes a pixel in the camera image, *j* denotes a voxel in the 3D space, *I* is the virtual image, and *V* is the voxel image. *w*_{ij} denotes the weight according to the distance between the camera pixel’s LOS and the voxel. In other words, the brightness of the virtual image is determined by the sum of the values obtained by multiplying the weight by all voxels existing within a certain range of the LOS. In this study, the above method was used to produce the voxel images and the virtual image for each camera.

*μ* is the coefficient of convergence, and *ω* is the weight coefficient according to the distance between the LOS of the image coordinate *i* and the voxel *j*. In other words, this algorithm reconstructs the voxel image in such a way that the ratio of the sum of all voxel values existing on the LOS to the brightness of the image at the corresponding pixel position converges to 1. The voxel image reconstructed by the MART method and the virtually produced voxel image were directly compared using the following equation to evaluate the accuracy of the proposed algorithm:
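Both operations can be sketched with a precomputed weight matrix *w* (pixels × voxels): the virtual image is the weighted projection of the voxels, and MART multiplies each voxel by the pixel ratio raised to a weighted power until that ratio converges to 1. A minimal dense-matrix version (practical implementations use sparse weights):

```python
import numpy as np

def virtual_image(W, V):
    """I_i = sum_j w_ij * V_j: each pixel is the weighted sum of the
    voxels lying within range of its LOS."""
    return W @ V

def mart(W, I, n_voxels, mu=1.0, iters=20):
    """Multiplicative ART: drive (sum_j w_ij V_j) / I_i toward 1.

    W: (pixels x voxels) LOS-distance weights, I: recorded pixel values,
    mu: coefficient of convergence. Voxels with zero weight for a pixel
    get exponent 0, i.e. are left unchanged by that pixel's update.
    """
    V = np.ones(n_voxels)
    for _ in range(iters):
        for i in np.nonzero(I > 0)[0]:
            proj = W[i] @ V                      # current projection of pixel i
            if proj > 0:
                V *= (I[i] / proj) ** (mu * W[i])
    return V
```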

*Q* denotes the recovery ratio, which indicates the reconstruction accuracy, and the superscripts *R* and *C* represent the reconstructed and created voxel images, respectively. Fig. 5 shows the recovery ratio of the voxel image, expressed by Eq. (9), according to the PPP (number of particles per pixel) for each self-calibration result. Here, the recovery ratio indicates the degree to which the particles that existed before the calculation are recovered after performing the reconstruction from the virtual images. The voxel images were created by setting the PPP to 0.002–0.1, and the performance of the constructed reconstruction algorithm was determined by evaluating the recovery ratio. A recovery ratio of 1 means that the reconstructed image and the created image match perfectly.
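Eq. (9) itself is not reproduced in this excerpt. A quality measure commonly used for tomographic reconstructions, and assumed here, is the normalized correlation of the reconstructed (*R*) and created (*C*) voxel images:

```python
import numpy as np

def recovery_ratio(V_R, V_C):
    """Assumed form of Eq. (9): normalized correlation of the
    reconstructed (R) and created (C) voxel images.
    Q = 1 means the two voxel fields match perfectly."""
    num = np.sum(V_R * V_C)
    den = np.sqrt(np.sum(V_R**2) * np.sum(V_C**2))
    return num / den
```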

### 3. Performance Evaluation Using Experimental Data

A pump with a flow rate of 540 L/h was installed to create a constantly rotating flow. Then, polyamide nylon particles with a diameter of 50 μm were inserted. An 8-W Laser System Europe Blits Pro laser was used as the light source, and the laser light was projected with a cross-section of about 10 × 10 mm, as indicated by the green region in Fig. 6. Four high-speed cameras were installed in a row in such a way that their rotation angles would be approximately −15°, −5°, 5°, and 15° from the front side.

Fig. 7 shows an image from the first camera obtained in the experiment. To minimize the number of virtual particles, a small number of particles was first seeded to obtain a low-density particle image (1,000 particles, or about 0.001 PPP at a resolution of 1,216 × 1,200); then, after performing the self-calibration, a high-density particle image (about 10,000 particles, or 0.01 PPP) was obtained for the tomographic PIV measurement, and the velocity vectors were measured.

*v*_{m} is the value located in the middle when the 27 vectors in a 3 × 3 × 3 region are arranged in order, and *v*_{i} is the current vector. When the rate of change of *v*_{i} relative to the median *v*_{m} was too large, the vector was determined to be an error. Fig. 9 shows the error rate of all vectors when the evaluation was performed with the median filter. When the camera’s self-calibration was not performed, about 10% of all vectors were determined to be errors. After completing the self-calibration stage, the error rate decreased to 8.8%, and as the self-calibration was repeated, the result showed a slight improvement. However, after repeating it more than three times, there was no further improvement; instead, when the self-calibration was performed five times, the error rate increased by 0.002%, which is negligible. Based on these results, the optimal number of self-calibration iterations is three for the given experiment. Furthermore, the optimal number of iterations can be determined through this error-rate analysis using the median filter.
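The median test can be sketched as follows; the rejection criterion below (deviation relative to the local median magnitude, threshold 2.0) is an assumption, since the study's exact threshold is not stated:

```python
import numpy as np

def median_error_rate(field, threshold=2.0):
    """Fraction of interior vectors whose deviation from the component-wise
    median of their 3x3x3 neighborhood (27 vectors) exceeds the threshold.

    field: (nx, ny, nz, 3) velocity-vector field. The threshold value and
    the component-wise median are assumptions for illustration.
    """
    nx, ny, nz, _ = field.shape
    errors, total = 0, 0
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                block = field[i-1:i+2, j-1:j+2, k-1:k+2].reshape(27, 3)
                v_m = np.median(block, axis=0)   # median of the 27 vectors
                v_i = field[i, j, k]
                if np.linalg.norm(v_i - v_m) > threshold * (np.linalg.norm(v_m) + 1e-9):
                    errors += 1
                total += 1
    return errors / total
```

Sweeping this error rate over the number of self-calibration repetitions reproduces the kind of analysis used to pick the optimal iteration count.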