A Bidirectional Self-Rectifying Network With Bayesian Modeling for Vision-Based Crack Detection

Abstract
Robotic vision is increasingly applied for surface inspection of built
infrastructure. For this, it is essential to develop robust algorithms for semantic
segmentation. This article presents a deep learning approach using a
bidirectional self-rectifying network with Bayesian modeling (BSNBM) for
improving detection accuracy while dealing with the embedded uncertainty
caused by false-positive labels and the nonlinearity of sequential convolutional
blocks. For integration with residual encoders, a feature-preserving branch is
designed, wherein the output of previous dilated convolutional blocks is
upsized or downsized, passed on, and concatenated with the following
blocks recursively and bidirectionally. Further, to achieve robustness in
feature representation with an acceptable level of credibility, convolutional
kernels are randomized via a Bayesian model and adjusted with each evidence
update. As such, the network becomes less sensitive to uncertainty and to the
redundant nonlinearity that is inevitable in activation layers. Experimental
results confirm the advantage of our BSNBM over current crack detection
approaches.
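
The abstract describes two mechanisms that are easiest to picture in code: a feature-preserving branch that resizes each block's output and concatenates it with neighbouring blocks in both directions, and convolutional kernels treated as random variables updated from evidence. The sketch below is a minimal, hypothetical illustration of these two ideas, assuming PyTorch; it is not the authors' BSNBM implementation, and all class, parameter, and channel names (BayesianConv2d, FeaturePreservingBranch, the channel widths) are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the authors' BSNBM implementation.
# Assumes PyTorch; all names and shapes are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianConv2d(nn.Module):
    """Convolution whose kernels are Gaussian random variables.

    Kernels are sampled with the reparameterization trick, so each forward
    pass reflects the uncertainty of the kernel distribution. In a full
    Bayesian treatment, a KL term against a prior would be added to the
    training loss so that the mean/variance parameters are adjusted as
    evidence accumulates.
    """

    def __init__(self, in_ch, out_ch, k=3, dilation=1):
        super().__init__()
        shape = (out_ch, in_ch, k, k)
        self.w_mu = nn.Parameter(torch.empty(shape))
        self.w_log_sigma = nn.Parameter(torch.full(shape, -5.0))
        self.dilation = dilation
        self.padding = dilation * (k // 2)
        nn.init.kaiming_normal_(self.w_mu)

    def forward(self, x):
        eps = torch.randn_like(self.w_mu)
        w = self.w_mu + torch.exp(self.w_log_sigma) * eps  # sampled kernel
        return F.conv2d(x, w, padding=self.padding, dilation=self.dilation)


class FeaturePreservingBranch(nn.Module):
    """Resizes each block's output and concatenates it with its neighbour,
    sweeping once forward (downsizing) and once backward (upsizing)."""

    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        self.blocks = nn.ModuleList(
            [BayesianConv2d(c, c, dilation=2) for c in channels]
        )
        # 1x1 fusion convs reduce concatenated features back to each width.
        self.fuse_fwd = nn.ModuleList(
            [nn.Conv2d(channels[i] + channels[i - 1], channels[i], 1)
             for i in range(1, len(channels))]
        )
        self.fuse_bwd = nn.ModuleList(
            [nn.Conv2d(channels[i] + channels[i + 1], channels[i], 1)
             for i in range(len(channels) - 1)]
        )

    def forward(self, feats):
        # feats: encoder feature maps, highest resolution first.
        fwd, prev = [], None
        for i, (f, blk) in enumerate(zip(feats, self.blocks)):  # forward sweep
            if prev is not None:
                prev = F.interpolate(prev, size=f.shape[-2:])   # downsize
                f = self.fuse_fwd[i - 1](torch.cat([f, prev], dim=1))
            prev = blk(f)
            fwd.append(prev)
        out, nxt = [None] * len(fwd), None
        for i in range(len(fwd) - 1, -1, -1):                   # backward sweep
            f = fwd[i]
            if nxt is not None:
                nxt = F.interpolate(nxt, size=f.shape[-2:])     # upsize
                f = self.fuse_bwd[i](torch.cat([f, nxt], dim=1))
            nxt = self.blocks[i](f)
            out[i] = nxt
        return out


# Hypothetical usage with three encoder feature maps at different resolutions.
feats = [torch.randn(1, 32, 128, 128),
         torch.randn(1, 64, 64, 64),
         torch.randn(1, 128, 32, 32)]
refined = FeaturePreservingBranch()(feats)
print([t.shape for t in refined])
```

Because the kernels are sampled rather than fixed, repeated forward passes yield slightly different segmentation maps, which is one simple way to expose the predictive uncertainty the abstract refers to.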