Optical Instruments, 2020, Vol. 42, Issue 2: 39-44

Monocular colony depth extraction algorithms based on transfer learning
DENG Xiangzhou, ZHANG Rongfu
School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
Abstract: The high-throughput colony sorter is an important piece of equipment for bacteria screening in the biopharmaceutical industry. It uses colony images for intelligent identification and selection, but at present the equipment recognizes only two-dimensional location information. To solve the problem of extracting three-dimensional colony information, this paper proposes a monocular-image colony depth extraction algorithm based on transfer learning. The algorithm is built on a residual network, combined with a multi-scale network structure for feature extraction, and adopts an unsupervised transfer-learning training mode so that the network can estimate colony depth information. Experimental results show that the average relative error of the algorithm is 0.171, the root mean square error is 6.198, and the log root mean square error is 0.256; the accuracy under a threshold of 1.25 increases to 76.4%. The algorithm obtains the depth information and surface characteristics of a colony simultaneously, providing a reference for further improving screening accuracy and effectively selecting colonies.
Key words: colony selection    monocular vision    depth estimation    transfer learning

1 Basic principles of monocular depth estimation

 $z = \frac{fB}{x_l - x_r} = \frac{fB}{d}$ (1)
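Equation (1) recovers depth from stereo geometry: with focal length $f$, baseline $B$, and disparity $d = x_l - x_r$, depth falls off as the reciprocal of disparity. A minimal sketch of this conversion, using illustrative camera parameters (not values from the paper):

```python
import numpy as np

def disparity_to_depth(disparity, f, B):
    """Depth from Eq. (1): z = f * B / d, for a disparity map d (in pixels)."""
    d = np.asarray(disparity, dtype=float)
    # guard against zero disparity (points at infinity)
    return np.where(d > 0, f * B / np.maximum(d, 1e-12), np.inf)

# hypothetical camera parameters, for illustration only
f = 1200.0   # focal length in pixels
B = 60.0     # baseline in mm
depth = disparity_to_depth([30.0, 60.0, 120.0], f, B)
# doubling the disparity halves the depth
```

The inverse relationship is why disparity maps, which the network actually predicts, can stand in for depth maps.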

 Figure 1 Structural diagram of the colony sorter equipment
2 Algorithm for extracting colony depth information and position
2.1 Network structure

 $H\left( x \right) = F\left( x \right) + x$ (2)
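Equation (2) is the residual mapping: the block learns only the residual $F(x)$ and adds the identity shortcut $x$ back before the output. A minimal numpy sketch (fully connected rather than convolutional, purely illustrative of the shortcut structure):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """H(x) = F(x) + x, where F is two weighted layers with a ReLU between."""
    f = relu(x @ w1) @ w2   # the learned residual F(x)
    return relu(f + x)      # identity shortcut added before the final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.01
w2 = rng.standard_normal((8, 8)) * 0.01
y = residual_block(x, w1, w2)
# with tiny weights, F(x) is near zero and the block approximates relu(x)
```

If the weights are driven to zero, the block degenerates to the identity (after ReLU), which is what makes very deep residual networks trainable.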

 Figure 2 Structural diagram of the residual module

 Figure 3 Depth estimation network structure diagram
2.2 Loss function

 $E = \frac{1}{N}\sum\limits_{i,j} \left[ \alpha \left( 1 - \mathrm{SSIM}\left( \tilde I_{i,j}, I_{i,j} \right) \right) + \left( 1 - \alpha \right) \left\| \tilde I_{i,j} - I_{i,j} \right\| \right]$ (3)
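Equation (3) mixes an SSIM dissimilarity term with an L1 photometric term, weighted by $\alpha$. A simplified sketch, assuming a global (un-windowed) SSIM and $\alpha = 0.85$, a common choice in unsupervised depth estimation work; the paper's actual window and weight are not shown here:

```python
import numpy as np

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def reconstruction_loss(pred, target, alpha=0.85):
    """Eq. (3): alpha * (1 - SSIM) + (1 - alpha) * mean L1 difference."""
    ssim_term = 1.0 - ssim_global(pred, target)
    l1_term = np.abs(pred - target).mean()
    return alpha * ssim_term + (1.0 - alpha) * l1_term

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
loss_same = reconstruction_loss(img, img)  # zero for a perfect reconstruction
```

The SSIM term rewards structural agreement while the L1 term penalizes raw intensity error, so the combined loss is less brittle than either alone.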

2.3 Transfer-learning-based network training method

 Figure 4 Colony dataset images

3 Experiments and results analysis

 $\delta_{\mathrm{Rel}} = \frac{1}{T}\sum\limits_i \frac{\left| d_i - \tilde d_i \right|}{\tilde d_i}$ (4)

 $\delta_{\mathrm{RMS}} = \sqrt{\frac{1}{T}\sum\limits_i \left\| d_i - \tilde d_i \right\|^2}$ (5)

 $\delta_{\mathrm{logRMS}} = \sqrt{\frac{1}{T}\sum\limits_i \left\| \ln\left( d_i \right) - \ln\left( \tilde d_i \right) \right\|^2}$ (6)

 $\delta = \max\left( \frac{d_i}{\tilde d_i}, \frac{\tilde d_i}{d_i} \right) < t$ (7)
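The four metrics in Eqs. (4)-(7) can be computed directly from predicted and ground-truth depth arrays. A sketch, assuming $d_i$ is the prediction and $\tilde d_i$ the ground truth:

```python
import numpy as np

def depth_metrics(pred, gt, t=1.25):
    """Eqs. (4)-(7): mean relative error, RMS, log-RMS, threshold accuracy."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    rel = np.mean(np.abs(pred - gt) / gt)                          # Eq. (4)
    rms = np.sqrt(np.mean((pred - gt) ** 2))                       # Eq. (5)
    log_rms = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))   # Eq. (6)
    ratio = np.maximum(pred / gt, gt / pred)                       # Eq. (7)
    acc = np.mean(ratio < t)   # fraction of pixels within threshold t
    return rel, rms, log_rms, acc

rel, rms, log_rms, acc = depth_metrics([1.0, 2.0, 4.0], [1.0, 2.0, 2.0])
```

The first three metrics are errors (lower is better); the threshold accuracy of Eq. (7) is the fraction of predictions within a factor $t$ of the ground truth (higher is better), which is how the paper's 76.4% figure at $t = 1.25$ is read.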

 Figure 5 Comparison of disparity map effects
4 Conclusions and outlook
