Recent Advance in Content-based Image Retrieval: A Literature Survey (arXiv:1706.06064)
Wengang Zhou, Houqiang Li, Qi Tian (cs.MM, cs.IR; 22 pages)

Abstract: The explosive increase and ubiquitous accessibility of visual data on the Web have led to a prosperity of research activity in image search and retrieval. Because they ignore visual content as a ranking cue, text-based search techniques applied to visual retrieval may suffer from inconsistency between the text words and the visual content. Content-based image retrieval (CBIR), which uses a representation of the visual content to identify relevant images, has attracted sustained attention over the past two decades. The problem is challenging due to the intention gap and the semantic gap. Numerous techniques have been developed for content-based image retrieval in the last decade. This paper categorizes and evaluates the algorithms proposed between 2003 and 2016, and concludes with several promising directions for future research.

References (continued):

[164] R. Baeza-Yates, B. Ribeiro-Neto et al., "Modern Information Retrieval." ACM Press, New York, 1999, vol. 463.
[165] J. Cai, Q. Liu, F. Chen, D. Joshi, and Q. Tian, "Scalable image search with multiple index tables," in International Conference on Multimedia Retrieval (ICMR). ACM, 2014, p. 407.
[166] L. Zheng, S. Wang, and Q. Tian, "Coupled binary embedding for large-scale image retrieval," IEEE Transactions on Image Processing (TIP), vol. 23, no. 8, pp. 3368–3380, 2014.
[167] X. Zhang, Z. Li, L. Zhang, W.-Y. Ma, and H.-Y. Shum, "Efficient indexing for large scale visual search," in IEEE International Conference on Computer Vision.
[168] C. Silpa-Anan and R. Hartley, "Optimised kd-trees for fast image descriptor matching," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[169] L. Zheng, S. Wang, Z. Liu, and Q. Tian, "Fast image retrieval: Query pruning and early termination," IEEE Transactions on Multimedia (TMM), vol. 17, no. 5, pp. 648–659, 2015.
[170] R. Ji, L.-Y. Duan, J. Chen, L. Xie, H. Yao, and W. Gao, "Learning to distribute vocabulary indexing for scalable visual search," IEEE Transactions on Multimedia (TMM), vol. 15, no. 1, pp. 153–166, 2013.
[171] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon, "Spherical hashing," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[172] J. Tang, Z. Li, M. Wang, and R. Zhao, "Neighborhood discriminant hashing for large-scale image retrieval," IEEE Transactions on Image Processing (TIP), vol. 24, no. 9, pp. 2827–2840, 2015.
[173] L. Wu, K. Zhao, H. Lu, Z. Wei, and B. Lu, "Distance preserving marginal hashing for image retrieval," in IEEE International Conference on Multimedia and Expo (ICME), 2015, pp. 1–6.
[174] K. Jiang, Q. Que, and B. Kulis, "Revisiting kernelized locality-sensitive hashing for improved large-scale image retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 4933–4941.
[175] H. Liu, R. Wang, S. Shan, and X. Chen, "Deep supervised hashing for fast image retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2064–2072.
[176] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni, "Locality-sensitive hashing scheme based on p-stable distributions," in Annual Symposium on Computational Geometry. ACM, 2004, pp. 253–262.
[177] Y. Avrithis, G. Tolias, and Y. Kalantidis, "Feature map hashing: sub-linear indexing of appearance and global geometry," in ACM International Conference on Multimedia (MM). ACM, 2010, pp. 231–240.
[178] G. Tolias, Y. Kalantidis, Y. Avrithis, and S. Kollias, "Towards large-scale geometry indexing by feature selection," Computer Vision and Image Understanding, vol. 120, pp. 31–45, 2014.
[179] H. Jégou, M. Douze, and C. Schmid, "Packing bag-of-features," in International Conference on Computer Vision, 2009, pp. 2357–2364.
[180] O. Chum, J. Philbin, M. Isard, and A. Zisserman, "Scalable near identical image and shot detection," in Proceedings of the ACM International Conference on Image and Video Retrieval, 2007, pp. 549–556.
[181] Z. Lin and J. Brandt, "A local bag-of-features model for large-scale object retrieval," in European Conference on Computer Vision (ECCV). Springer, 2010, pp. 294–308.
[182] H. Jégou, C. Schmid, H. Harzallah, and J. Verbeek, "Accurate image search using the contextual dissimilarity measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 2–11, 2010.
[183] D. Qin, C. Wengert, and L. Van Gool, "Query adaptive similarity for large scale object retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013, pp. 1610–1617.
[184] M. Donoser and H. Bischof, "Diffusion processes for retrieval revisited," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 1320–1327.
[185] L. Zheng, S. Wang, Z. Liu, and Q. Tian, "Lp-norm IDF for large scale image search," in IEEE Conference on Computer Vision and Pattern Recognition, 2013.
[186] L. Zheng, S. Wang, and Q. Tian, "Lp-norm IDF for scalable image retrieval," IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3604–3617, 2014.
[187] X. Shen, Z. Lin, J. Brandt, S. Avidan, and Y. Wu, "Object retrieval and localization with spatially-constrained similarity measure and k-nn re-ranking," in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3013–3020.
[188] H. Xie, K. Gao, Y. Zhang, J. Li, and Y. Liu, "Pairwise weak geometric consistency for large scale image search," in ACM International Conference on Multimedia Retrieval (ICMR). ACM, 2011, p. 42.
[189] S. M. Katz, "Distribution of content words and phrases in text and language modelling," Natural Language Engineering, vol. 2, no. 1, pp. 15–59, 1996.
[190] H. Jégou, M. Douze, and C. Schmid, "On the burstiness of visual elements," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[191] M. Shi, Y. Avrithis, and H. Jégou, "Early burst detection for memory-efficient image retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[192] S. Bai and X. Bai, "Sparse contextual activation for efficient visual re-ranking," IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1056–1069, 2016.
[193] F. Yang, B. Matei, and L. S. Davis, "Re-ranking by multi-feature fusion with diffusion for image retrieval," in IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2015, pp. 572–579.
[194] X. Li, M. Larson, and A. Hanjalic, "Pairwise geometric matching for large-scale object retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 5153–5161.
[195] Y.-H. Kuo, K.-T. Chen, C.-H. Chiang, and W. H. Hsu, "Query expansion for hash-based image object retrieval," in ACM International Conference on Multimedia, 2009, pp. 65–74.
[196] O. Chum and J. Matas, "Matching with PROSAC: progressive sample consensus," in IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 220–226.
[197] G. Tolias and Y. Avrithis, "Hough pyramid matching: Speeded-up geometry re-ranking for large scale image retrieval," in IEEE International Conference on Computer Vision (ICCV), 2011.
[198] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[199] Y. Avrithis and G. Tolias, "Hough pyramid matching: Speeded-up geometry re-ranking for large scale image retrieval," International Journal of Computer Vision, vol. 107, no. 1, pp. 1–19, 2014.
[200] K. Grauman and T. Darrell, "The pyramid match kernel: Discriminative classification with sets of image features," in IEEE International Conference on Computer Vision (ICCV), vol. 2. IEEE, 2005, pp. 1458–1465.
[201] W. Zhou, H. Li, Y. Lu, and Q. Tian, "SIFT match verification by geometric coding for large-scale partial-duplicate web image search," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 9, no. 1, p. 4, 2013.
[202] L. Chu, S. Jiang, S. Wang, Y. Zhang, and Q. Huang, "Robust spatial consistency graph model for partial duplicate image retrieval," IEEE Transactions on Multimedia (TMM), vol. 15, no. 8, pp. 1982–1996, 2013.
[203] L. Xie, Q. Tian, W. Zhou, and B. Zhang, "Fast and accurate near-duplicate image search with affinity propagation on the ImageWeb," Computer Vision and Image Understanding, vol. 124, pp. 31–41, 2014.
[204] J. M. Kleinberg, "Authoritative sources in a hyperlinked environment," Journal of the ACM (JACM), vol. 46, no. 5, pp. 604–632, 1999.
[205] L. Xie, Q. Tian, W. Zhou, and B. Zhang, "Heterogeneous graph propagation for large-scale web image search," IEEE Transactions on Image Processing (TIP), 2015.
[206] H. Xie, Y. Zhang, J. Tan, L. Guo, and J. Li, "Contextual query expansion for image retrieval," IEEE Transactions on Multimedia (TMM), vol. 16, no. 4, pp. 1104–1114, 2014.
[207] D. Tao and X. Tang, "Random sampling based SVM for relevance feedback image retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.
[208] D. Tao, X. Tang, X. Li, and X. Wu, "Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 28, no. 7, pp. 1088–1099, 2006.
[209] S. C. Hoi, R. Jin, J. Zhu, and M. R. Lyu, "Semi-supervised SVM batch mode active learning for image retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–7.
[210] M. Arevalillo-Herráez and F. J. Ferri, "An improved distance-based relevance feedback strategy for image retrieval," Image and Vision Computing (IVC), vol. 31, no. 10, pp. 704–713, 2013.
[211] E. Rabinovich, O. Rom, and O. Kurland, "Utilizing relevance feedback in fusion-based retrieval," in International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR). ACM, 2014, pp. 313–322.
[212] X.-Y. Wang, Y.-W. Li, H.-Y. Yang, and J.-W. Chen, "An image retrieval scheme with relevance feedback using feature reconstruction and SVM reclassification," Neurocomputing, vol. 127, pp. 214–230, 2014.
[213] K. Tieu and P. Viola, "Boosting image retrieval," International Journal of Computer Vision (IJCV), vol. 56, no. 1-2, pp. 17–36, 2004.
[214] J. Yu, D. Tao, M. Wang, and Y. Rui, "Learning to rank using user clicks and visual features for image retrieval," IEEE Transactions on Cybernetics, vol. 45, no. 4, pp. 767–779, 2015.
[215] X. S. Zhou and T. S. Huang, "Relevance feedback in image retrieval: A comprehensive review," Multimedia Systems, vol. 8, no. 6, pp. 536–544, 2003.
[216] P. B. Patil and M. B. Kokare, "Relevance feedback in content based image retrieval: A review," Journal of Applied Computer Science & Mathematics, no. 10, 2011.
[217] R. Fagin, R. Kumar, and D. Sivakumar, "Efficient similarity search and classification via rank aggregation," in ACM SIGMOD International Conference on Management of Data. ACM, 2003, pp. 301–312.
[218] L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank citation ranking: Bringing order to the Web," 1999.
[219] G. Ye, D. Liu, I.-H. Jhuo, S.-F. Chang et al., "Robust late fusion with rank minimization," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3021–3028.
[220] S. Romberg, L. G. Pueyo, R. Lienhart, and R. Van Zwol, "Scalable logo recognition in real-world images," in ACM International Conference on Multimedia Retrieval (ICMR). ACM, 2011, p. 25.
[221] S. Wang and S. Jiang, "INSTRE: A new benchmark for instance-level object retrieval and recognition," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 11, no. 3, p. 37, 2015.
[222] V. R. Chandrasekhar, D. M. Chen, S. S. Tsai, N.-M. Cheung, H. Chen, G. Takacs, Y. Reznik, R. Vedantham, R. Grzeszczuk, J. Bach et al., "The Stanford mobile visual search data set," in ACM Conference on Multimedia Systems. ACM, 2011, pp. 117–122.
[223] X. Tian, Y. Lu, L. Yang, and Q. Tian, "Learning to judge image search results," in ACM International Conference on Multimedia (MM). ACM, 2011, pp. 363–372.
[224] X. Tian, Q. Jia, and T. Mei, "Query difficulty estimation for image search with query reconstruction error," IEEE Transactions on Multimedia (TMM), vol. 17, no. 1, pp. 79–91, 2015.
[225] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," in European Conference on Computer Vision (ECCV). Springer, 2014, pp. 346–361.
Rethinking Atrous Convolution for Semantic Image Segmentation (arXiv:1706.05587)
Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam (cs.CV)

# Abstract
In this work, we revisit atrous convolution, a powerful tool to explicitly adjust a filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context, to further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed "DeepLabv3" system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
# 1. Introduction
For the task of semantic segmentation [20, 63, 14, 97, 7], we consider two challenges in applying Deep Convolutional Neural Networks (DCNNs) [50]. The first one is the reduced feature resolution caused by consecutive pooling operations or convolution striding, which allows DCNNs to learn increasingly abstract feature representations. However, this invariance to local image transformation may impede dense prediction tasks, where detailed spatial information is desired. To overcome this problem, we advocate the use of atrous convolution [36, 26, 74, 66], which has been shown to be effective for semantic image segmentation [10, 90, 11]. Atrous convolution, also known as dilated convolution, allows us to repurpose ImageNet [72] pretrained networks to extract denser feature maps by removing the downsampling operations from the last few layers and upsampling the corresponding filter kernels, equivalent to inserting holes ("trous" in French) between filter weights. With atrous convolution, one is able to control the resolution at which feature
responses are computed within DCNNs without requiring learning extra parameters.

[Figure 1. Atrous convolution with a 3x3 kernel at rates 1, 6, and 24 applied to a feature map.]
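To make this concrete, the following minimal PyTorch sketch (the framework choice is ours for illustration; the paper does not prescribe one) shows that a 3x3 convolution with dilation rate r and padding r keeps the spatial resolution fixed while the effective field of view grows to (2r + 1) x (2r + 1):

```python
import torch
import torch.nn as nn

# A 3x3 atrous (dilated) convolution: with dilation rate r and padding r,
# the spatial size of the feature map is preserved, while the effective
# field of view grows to (2*r + 1) x (2*r + 1).
x = torch.randn(1, 64, 65, 65)  # (batch, channels, height, width)

for rate in (1, 6, 24):
    conv = nn.Conv2d(64, 64, kernel_size=3, dilation=rate, padding=rate)
    y = conv(x)
    print(rate, tuple(y.shape))  # spatial size stays 65x65 for every rate
```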
Another difficulty comes from the existence of objects at multiple scales. Several methods have been proposed to handle the problem, and we mainly consider four categories in this work, as illustrated in Fig. 2. First, the DCNN is applied to an image pyramid to extract features for each scale input [22, 19, 69, 55, 12, 11], where objects at different scales become prominent in different feature maps. Second, the encoder-decoder structure [3, 71, 25, 54, 70, 68, 39] exploits multi-scale features from the encoder part and recovers the spatial resolution in the decoder part. Third, extra modules are cascaded on top of the original network to capture long-range information. In particular, DenseCRF [45] is employed to encode pixel-level pairwise similarities [10, 96, 55, 73], while [59, 90] develop several extra convolutional layers in cascade to gradually capture long-range context. Fourth, spatial pyramid pooling [11, 95] probes an incoming feature map with filters or pooling operations at multiple rates and multiple effective fields-of-view, thus capturing objects at multiple scales.
In this work, we revisit applying atrous convolution, which allows us to effectively enlarge the field of view of filters to incorporate multi-scale context, in the framework of both cascaded modules and spatial pyramid pooling. In particular, our proposed module consists of atrous convolution with various rates and batch normalization layers which we
found important to train as well. We experiment with laying out the modules in cascade or in parallel (specifically, the Atrous Spatial Pyramid Pooling (ASPP) method [11]). We discuss an important practical issue when applying a 3x3 atrous convolution with an extremely large rate: due to image boundary effects it fails to capture long-range information and effectively degenerates to a 1x1 convolution, and we propose to incorporate image-level features into the ASPP module. Furthermore, we elaborate on implementation details and share experience on training the proposed models, including a simple yet effective bootstrapping method for handling rare and finely annotated objects. In the end, our proposed model, "DeepLabv3", improves over our previous works [10, 11] and attains a performance of 85.7% on the PASCAL VOC 2012 test set without DenseCRF post-processing.

[Figure 2. Alternative architectures to capture multi-scale context: (a) image pyramid; (b) encoder-decoder; (c) deeper network with atrous convolution; (d) spatial pyramid pooling.]
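As a rough illustration of the ASPP design sketched above (parallel atrous branches with batch normalization, plus image-level features), here is a minimal PyTorch reconstruction; the rate set (6, 12, 18), channel widths, and the 1x1 fusion layer are assumptions based on the paper's description, not the authors' released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling with image-level features (sketch)."""
    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        def branch(k, rate):
            pad = 0 if k == 1 else rate
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=pad,
                          dilation=rate, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # One 1x1 branch plus one 3x3 atrous branch per rate.
        self.branches = nn.ModuleList(
            [branch(1, 1)] + [branch(3, r) for r in rates])
        # Image-level features: global average pooling + 1x1 convolution.
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # Fuse the concatenated branches back to out_ch channels.
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [b(x) for b in self.branches]
        img = F.interpolate(self.image_pool(x), size=(h, w),
                            mode='bilinear', align_corners=False)
        return self.project(torch.cat(feats + [img], dim=1))
```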
# 2. Related Work
It has been shown that global features or contextual interactions [33, 76, 43, 48, 27, 89] are beneficial in correctly classifying pixels for semantic segmentation. In this work, we discuss four types of Fully Convolutional Networks (FCNs) [74, 60] (see Fig. 2 for illustration) that exploit context information for semantic segmentation [30, 15, 62, 9, 96, 55, 65, 73, 87].
Image pyramid: The same model, typically with shared weights, is applied to multi-scale inputs. Feature responses from the small-scale inputs encode the long-range context, while the large-scale inputs preserve the small object details. Typical examples include Farabet et al. [22], who transform the input image through a Laplacian pyramid, feed each scale input to a DCNN, and merge the feature maps from all the scales. [19, 69] apply multi-scale inputs sequentially from coarse to fine, while [55, 12, 11] directly resize the input to several scales and fuse the features from all the scales. The main drawback of this type of model is that it does not scale well to larger/deeper DCNNs (e.g., networks like [32, 91, 86]) due to limited GPU memory, and thus it is usually applied only during the inference stage [16].
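A minimal sketch of such shared-weight multi-scale inference is shown below; the scale set and the max-fusion rule are illustrative choices, not a prescription from any one of the cited works:

```python
import torch
import torch.nn.functional as F

def multiscale_inference(model, image, scales=(0.5, 0.75, 1.0)):
    """Apply one shared-weight model to an image pyramid and fuse the
    per-scale score maps (elementwise max over scales is one common choice)."""
    n, c, h, w = image.shape
    fused = None
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
                               align_corners=False)
        scores = model(scaled)                       # (n, classes, s*h, s*w)
        scores = F.interpolate(scores, size=(h, w), mode='bilinear',
                               align_corners=False)  # back to input size
        fused = scores if fused is None else torch.max(fused, scores)
    return fused
```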
Encoder-decoder: This model consists of two parts: (a) the encoder, where the spatial dimension of feature maps is gradually reduced so that longer-range information is more easily captured in the deeper encoder output, and (b) the decoder, where object details and spatial dimension are gradually recovered. For example, [60, 64] employ deconvolution [92] to learn the upsampling of low-resolution feature responses. SegNet [3] reuses the pooling indices from the encoder and learns extra convolutional layers to densify the feature responses, while U-Net [71] adds skip connections from the encoder features to the corresponding decoder activations, and [25] employs a Laplacian pyramid reconstruction network. More recently, RefineNet [54] and [70, 68, 39] have demonstrated the effectiveness of models based on the encoder-decoder structure on several semantic segmentation benchmarks. This type of model is also explored in the context of object detection [56, 77].

Context module: This model contains extra modules laid out in cascade to encode long-range context. One effective method is to incorporate DenseCRF [45] (with efficient high-dimensional filtering algorithms [2]) into DCNNs [10, 11]. Furthermore, [96, 55, 73] propose to jointly train both the CRF and DCNN components, while [59, 90] employ several extra convolutional layers on top of the belief maps of DCNNs (belief maps are the final DCNN feature maps that contain output channels equal to the number of predicted classes) to capture context information. Recently, [41] proposes to learn a general and sparse high-dimensional convolution (bilateral convolution), and [82, 8] combine Gaussian Conditional Random Fields and DCNNs for semantic segmentation.
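The cascaded context module of [59, 90] can be pictured as a short stack of 3x3 convolutions with growing dilation rates applied on the belief maps; the rate schedule below loosely follows the basic context module of [90] and is a schematic reconstruction, not that paper's implementation:

```python
import torch.nn as nn

def context_module(num_classes, dilations=(1, 1, 2, 4, 8, 16, 1)):
    """Cascaded 3x3 convolutions with exponentially growing dilation rates,
    applied on belief maps (channels = number of classes); a sketch."""
    layers = []
    for d in dilations:
        layers += [nn.Conv2d(num_classes, num_classes, 3,
                             padding=d, dilation=d),
                   nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(num_classes, num_classes, 1))  # final 1x1 layer
    return nn.Sequential(*layers)
```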
Spatial pyramid pooling: This model employs spatial pyramid pooling [28, 49] to capture context at several ranges. The image-level features are exploited in ParseNet [58] for global context information. DeepLabv2 [11] proposes atrous spatial pyramid pooling (ASPP), where parallel atrous convolution layers with different rates capture multi-scale information. Recently, the Pyramid Scene Parsing Network (PSP) [95] performs spatial pooling at several grid scales and demonstrates outstanding performance on several semantic segmentation benchmarks. There are other methods based on LSTM [35] to aggregate global context [53, 6, 88]. Spatial pyramid pooling has also been applied in object detection [31].
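A PSP-style pyramid pooling layer can be sketched as adaptive average pooling at a few grid sizes, each reduced by a 1x1 convolution, upsampled, and concatenated with the input features; the grid sizes (1, 2, 3, 6) match those reported for PSP [95], while the rest is an illustrative reconstruction:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """PSP-style pooling: average-pool at several grid scales, reduce with
    1x1 convolutions, upsample, and concatenate with the input (sketch)."""
    def __init__(self, in_ch=2048, grids=(1, 2, 3, 6)):
        super().__init__()
        red = in_ch // len(grids)  # channel reduction per pyramid level
        self.levels = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(g),
                          nn.Conv2d(in_ch, red, 1, bias=False))
            for g in grids)

    def forward(self, x):
        h, w = x.shape[2:]
        outs = [x]
        for level in self.levels:
            outs.append(F.interpolate(level(x), size=(h, w),
                                      mode='bilinear', align_corners=False))
        return torch.cat(outs, dim=1)  # in_ch * 2 channels when evenly split
```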
In this work, we mainly explore atrous convolution [36, 26, 74, 66, 10, 90, 11] as a context module and tool for spatial pyramid pooling. Our proposed framework is general in the sense that it could be applied to any network. To be concrete, we duplicate several copies of the original last block in ResNet [32] and arrange them in cascade, and also revisit the ASPP module [11] which contains several atrous convolutions in parallel. Note that our cascaded modules are applied directly on the feature maps instead of belief maps. For the proposed modules, we experimentally find it important to train with batch normalization [38]. To further capture global context, we propose to augment ASPP with image-level features, similar to [58, 95].
Atrous convolution: Models based on atrous convolution have been actively explored for semantic segmentation. For example, [85] experiments with the effect of modifying atrous rates for capturing long-range information, [84] adopts hybrid atrous rates within the last two blocks of ResNet, while [18] further proposes to learn the deformable convolution which samples the input features with learned offset, generalizing atrous convolution. To further improve the segmentation model accuracy, [83] exploits image captions, [40] utilizes video motion, and [44] incorporates depth information. Besides, atrous convolution has been applied to object detection by [66, 17, 37].
# 3. Methods
In this section, we review how atrous convolution is applied to extract dense features for semantic segmentation. We then discuss the proposed modules, which employ atrous convolution in cascade or in parallel.
# 3.1. Atrous Convolution for Dense Feature Extraction
Deep Convolutional Neural Networks (DCNNs) [50] deployed in fully convolutional fashion [74, 60] have shown to be effective for the task of semantic segmentation. However, the repeated combination of max-pooling and striding at consecutive layers of these networks significantly reduces the spatial resolution of the resulting feature maps, typically by a factor of 32 across each direction in recent DCNNs [47, 78, 32]. Deconvolutional layers (or transposed convolution) [92, 60, 64, 3, 71, 68] have been employed to recover the spatial resolution. Instead, we advocate the use of "atrous convolution", originally developed for the efficient computation of the undecimated wavelet transform in the "algorithme à trous" scheme of [36] and used before in the DCNN context by [26, 74, 66].
Consider two-dimensional signals; for each location i on the output y and a filter w, atrous convolution is applied over the input feature map x:

$$y[i] = \sum_{k} x[i + r \cdot k]\, w[k] \tag{1}$$
where the atrous rate r corresponds to the stride with which we sample the input signal, which is equivalent to convolving the input x with upsampled filters produced by inserting r − 1 zeros between two consecutive filter values along each spatial dimension (hence the name atrous convolution, where the French word trous means holes in English). Standard convolution is a special case for rate r = 1, and atrous convolution allows us to adaptively modify a filter's field-of-view by changing the rate value. See Fig. 1 for illustration.
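To make Eq. (1) concrete, the following is a minimal NumPy sketch of one-dimensional atrous convolution; the function name and the "valid"-style output range are our own illustration, not the TensorFlow implementation used in this work.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    # y[i] = sum_k x[i + rate * k] * w[k], over positions where all taps are valid.
    span = rate * (len(w) - 1)  # reach of the dilated filter
    y = np.zeros(len(x) - span)
    for i in range(len(y)):
        y[i] = sum(x[i + rate * k] * w[k] for k in range(len(w)))
    return y

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])
print(atrous_conv1d(x, w, rate=1))  # standard convolution (r = 1)
print(atrous_conv1d(x, w, rate=2))  # same filter, doubled field-of-view
```

Setting rate = 2 reads every other input sample, which is equivalent to convolving with the filter [1, 0, 0, 0, -1] obtained by inserting r − 1 = 1 zero between consecutive weights.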
Atrous convolution also allows us to explicitly control how densely to compute feature responses in fully convolutional networks. Here, we denote by output stride the ratio of input image spatial resolution to final output resolution. For the DCNNs [47, 78, 32] deployed for the task of image classification, the final feature responses (before fully connected layers or global pooling) are 32 times smaller than the input image dimension, and thus output stride = 32. If one would like to double the spatial density of computed feature responses in the DCNNs (i.e., output stride = 16), the stride of the last pooling or convolutional layer that decreases resolution is set to 1 to avoid signal decimation. Then, all subsequent convolutional layers are replaced with atrous convolutional layers having rate r = 2. This allows us to extract denser feature responses without requiring learning any extra parameters. Please refer to [11] for more details.
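This bookkeeping can be sketched in a few lines; the helper below is our own illustration, assuming nominal cumulative strides (4, 4, 8, 16, 32) for Conv1+Pool1 and block1–block4 of ResNet: once a stage would push the cumulative stride past the desired output stride, its stride is set to 1 and the removed factor is folded into the atrous rate of the subsequent layers.

```python
def atrous_plan(output_stride, nominal=(4, 4, 8, 16, 32)):
    # Per-stage (stride, rate) given the backbone's nominal cumulative strides.
    plan, rate = [], 1
    for s_prev, s in zip(nominal, nominal[1:]):
        stride = s // s_prev   # stride this stage would normally apply
        if s > output_stride:  # stop decimating here; dilate instead
            rate *= stride
            stride = 1
        plan.append((stride, rate))
    return plan

print(atrous_plan(16))  # [(1, 1), (2, 1), (2, 1), (1, 2)] -> block4 at rate 2
print(atrous_plan(8))   # [(1, 1), (2, 1), (1, 2), (1, 4)] -> block3 rate 2, block4 rate 4
```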
# 3.2. Going Deeper with Atrous Convolution
We first explore designing modules with atrous convolution laid out in cascade. To be concrete, we duplicate several copies of the last ResNet block, denoted as block4 in Fig. 3, and arrange them in cascade. There are three 3 × 3 convolutions in those blocks, and the last convolution contains stride 2 except the one in the last block, similar to the original ResNet. The motivation behind this model is that the introduced striding makes it easy to capture long-range information in the deeper blocks. For example, the whole image feature could be summarized in the last small-resolution feature map, as illustrated in Fig. 3 (a). However, we discover that the consecutive striding is harmful for semantic segmentation (see Tab. 1 in Sec. 4) since detail information is decimated, and thus we apply atrous convolution with rates determined by the desired output stride value, as shown in Fig. 3 (b) where output stride = 16.
In this proposed model, we experiment with cascaded ResNet blocks up to block7 (i.e., extra block5, block6, block7 as replicas of block4), which has output stride = 256 if no atrous convolution is applied.
Figure 3. Cascaded modules without and with atrous convolution. (a) Going deeper without atrous convolution: consecutive striding in block1–block7 grows the output stride as 4, 8, 16, 32, 64, 128, 256. (b) Going deeper with atrous convolution: atrous convolution with rate > 1 (rates 2, 4, 8, 16 in block4–block7) is applied after block3 when output stride = 16, so the resolution is preserved.
# 3.2.1 Multi-grid Method
Motivated by multi-grid methods which employ a hierarchy of grids of different sizes [4, 81, 5, 67] and following [84, 18], we adopt different atrous rates within block4 to block7 in the proposed model. In particular, we define as Multi_Grid = (r1, r2, r3) the unit rates for the three convolutional layers within block4 to block7. The final atrous rate for a convolutional layer is equal to the multiplication of the unit rate and the corresponding rate. For example, when output stride = 16 and Multi_Grid = (1, 2, 4), the three convolutions will have rates = 2 · (1, 2, 4) = (2, 4, 8) in block4, respectively.
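The rate arithmetic is a one-liner (the helper name is ours):

```python
def block_rates(corresponding_rate, multi_grid):
    # Effective atrous rates of the three convolutions within one block.
    return tuple(corresponding_rate * u for u in multi_grid)

# output stride = 16 gives block4 a corresponding rate of 2, so:
print(block_rates(2, (1, 2, 4)))  # (2, 4, 8)
```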
# 3.3. Atrous Spatial Pyramid Pooling
id i") id (2) ° eS Normalized count ° io ââ1 valid weight â=-4 valid weights â=â9 valid weights % 20 40 60 80 atrous rate | 1706.05587#18 | Rethinking Atrous Convolution for Semantic Image Segmentation | In this work, we revisit atrous convolution, a powerful tool to explicitly
We revisit the Atrous Spatial Pyramid Pooling proposed in [11], where four parallel atrous convolutions with different atrous rates are applied on top of the feature map. ASPP is inspired by the success of spatial pyramid pooling [28, 49, 31] which showed that it is effective to resample features at different scales for accurately and efficiently classifying regions of an arbitrary scale. Different from [11], we include batch normalization within ASPP.

Figure 4. Normalized counts of valid weights with a 3 × 3 filter on a 65 × 65 feature map as the atrous rate varies. When the atrous rate is small, all 9 filter weights are applied to most of the valid region on the feature map, while as the atrous rate gets larger, the 3 × 3 filter degenerates to a 1 × 1 filter since only the center weight is effective.
ASPP with different atrous rates effectively captures multi-scale information. However, we discover that as the sampling rate becomes larger, the number of valid filter weights (i.e., the weights that are applied to the valid feature region, instead of padded zeros) becomes smaller. This effect is illustrated in Fig. 4 when applying a 3 × 3 filter to a 65 × 65 feature map with different atrous rates. In the extreme case where the rate value is close to the feature map size, the 3 × 3 filter, instead of capturing the whole image context, degenerates to a simple 1 × 1 filter since only the center filter weight is effective.
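The counting behind Fig. 4 can be reproduced with a short NumPy script; this is our own illustration of the effect, not code from the paper.

```python
import numpy as np

def valid_weight_counts(rate, size=65):
    # For each placement of a 3x3 atrous filter on a size x size map,
    # count how many of its 9 taps land inside the (unpadded) map.
    offs = np.array([-rate, 0, rate])
    idx = np.arange(size)
    valid_1d = ((idx[:, None] + offs >= 0) & (idx[:, None] + offs < size)).sum(1)
    return np.outer(valid_1d, valid_1d)  # the count is separable in the two axes

print((valid_weight_counts(6) == 9).mean())   # small rate: most positions keep all 9 weights
print((valid_weight_counts(63) == 1).mean())  # rate near the map size: mostly 1 valid weight
```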
To overcome this problem and incorporate global context information into the model, we adopt image-level features, similar to [58, 95]. Specifically, we apply global average pooling on the last feature map of the model, feed the resulting image-level features to a 1 × 1 convolution with 256 filters (and batch normalization [38]), and then bilinearly upsample the feature to the desired spatial dimension. In the end, our improved ASPP consists of (a) one 1 × 1 convolution and three 3 × 3 convolutions with rates = (6, 12, 18) when output stride = 16 (all with 256 filters and batch normalization), and (b) the image-level features, as shown in Fig. 5. Note that the rates are doubled when output stride = 8. The resulting features from all the branches are then concatenated and passed through another 1 × 1 convolution (also with 256 filters and batch normalization) before the final 1 × 1 convolution which generates the final logits.
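A compact tf.keras sketch of this head is given below; it is our own reconstruction from the description above (not the released implementation), with `features` denoting the backbone output at output stride = 16.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel, rate=1):
    x = layers.Conv2D(filters, kernel, padding="same",
                      dilation_rate=rate, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def aspp_head(features, num_classes, rates=(6, 12, 18)):
    size = tf.shape(features)[1:3]
    branches = [conv_bn_relu(features, 256, 1)]  # the 1x1 convolution branch
    branches += [conv_bn_relu(features, 256, 3, r) for r in rates]
    # Image-level features: global average pooling -> 1x1 conv -> bilinear upsample.
    pooled = tf.reduce_mean(features, axis=[1, 2], keepdims=True)
    pooled = conv_bn_relu(pooled, 256, 1)
    branches.append(tf.image.resize(pooled, size))
    x = layers.Concatenate()(branches)
    x = conv_bn_relu(x, 256, 1)
    return layers.Conv2D(num_classes, 1)(x)  # final 1x1 convolution -> logits
```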
# 4. Experimental Evaluation
We adapt the ImageNet-pretrained [72] ResNet [32] to semantic segmentation by applying atrous convolution to extract dense features. Recall that output stride is defined as the ratio of input image spatial resolution to final output resolution. For example, when output stride = 8, the last two blocks (block3 and block4 in our notation) in the original ResNet contain atrous convolution with rate = 2 and rate = 4 respectively. Our implementation is built on TensorFlow [1].

Figure 5. Parallel modules with atrous convolution (ASPP), augmented with image-level features.
We evaluate the proposed models on the PASCAL VOC 2012 semantic segmentation benchmark [20] which contains 20 foreground object classes and one background class. The original dataset contains 1,464 (train), 1,449 (val), and 1,456 (test) pixel-level labeled images for training, validation, and testing, respectively. The dataset is augmented by the extra annotations provided by [29], resulting in 10,582 (trainaug) training images. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.
| output stride | 8 | 16 | 32 | 64 | 128 | 256 |
| --- | --- | --- | --- | --- | --- | --- |
| mIOU | 75.18 | 73.88 | 70.06 | 59.99 | 42.34 | 20.29 |

Table 1. Going deeper with atrous convolution when employing ResNet-50 with block7 and different output stride. Adopting output stride = 8 leads to better performance at the cost of more memory usage.
# 4.1. Training Protocol
In this subsection, we discuss details of our training protocol.
Learning rate policy: Similar to [58, 11], we employ a "poly" learning rate policy where the initial learning rate is multiplied by (1 − iter/max_iter)^power with power = 0.9.
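In code the schedule is one line (a sketch; the variable names are ours):

```python
def poly_lr(base_lr, step, max_steps, power=0.9):
    return base_lr * (1.0 - step / max_steps) ** power
```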
Crop size: Following the original training protocol [10, 11], patches are cropped from the image during training. For atrous convolution with large rates to be effective, a large crop size is required; otherwise, the filter weights with large atrous rate are mostly applied to the padded zero region. We thus employ a crop size of 513 during both training and test on the PASCAL VOC 2012 dataset.
Batch normalization: Our added modules on top of ResNet all include batch normalization parameters [38], which we found important to be trained as well. Since a large batch size is required to train batch normalization parameters, we employ output stride = 16 and compute the batch normalization statistics with a batch size of 16. The batch normalization parameters are trained with decay = 0.9997. After training on the trainaug set with 30K iterations and initial learning rate = 0.007, we then freeze the batch normalization parameters, employ output stride = 8, and train on the official PASCAL VOC 2012 trainval set for another 30K iterations with a smaller base learning rate = 0.001. Note that atrous convolution allows us to control the output stride value at different training stages without requiring learning extra model parameters. Also note that training with output stride = 16 is several times faster than output stride = 8 since the intermediate feature maps are spatially four times smaller, but at a sacrifice of accuracy since output stride = 16 provides coarser feature maps.
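The two training stages can be summarized by the following configuration sketch (the field names are ours):

```python
TRAIN_STAGES = [
    dict(data="trainaug", steps=30_000, base_lr=0.007,
         output_stride=16, train_batchnorm=True, batch_size=16),
    dict(data="trainval", steps=30_000, base_lr=0.001,
         output_stride=8, train_batchnorm=False),  # BN statistics frozen
]
```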
Upsampling logits: In our previous works [10, 11], the target groundtruths are downsampled by 8 during training when output stride = 8. We find it important to keep the groundtruths intact and instead upsample the final logits, since downsampling the groundtruths removes the fine annotations, resulting in no back-propagation of details.
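A minimal sketch of this choice, assuming TF2-style ops and ignoring the void-label masking of a real pipeline:

```python
import tensorflow as tf

def segmentation_loss(logits, labels):
    # labels: [B, H, W] int class ids; logits: [B, h, w, C] at reduced resolution.
    # Upsample the logits to full label resolution instead of downsampling labels.
    logits = tf.image.resize(logits, tf.shape(labels)[1:3])  # bilinear by default
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
```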
Data augmentation: We apply data augmentation by randomly scaling the input images (from 0.5 to 2.0) and randomly left-right flipping during training.
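A sketch of this augmentation, assuming an image tensor of shape [H, W, 3] and a label tensor of shape [H, W, 1]; nearest-neighbor resizing keeps the labels discrete:

```python
import tensorflow as tf

def augment(image, label, min_scale=0.5, max_scale=2.0):
    scale = tf.random.uniform([], min_scale, max_scale)
    size = tf.cast(scale * tf.cast(tf.shape(image)[:2], tf.float32), tf.int32)
    image = tf.image.resize(image, size)
    label = tf.image.resize(label, size, method="nearest")
    if tf.random.uniform([]) > 0.5:  # random left-right flip
        image = tf.image.flip_left_right(image)
        label = tf.image.flip_left_right(label)
    return image, label
```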
# 4.2. Going Deeper with Atrous Convolution
We first experiment with building more blocks with atrous convolution in cascade.
ResNet-50: In Tab. 1, we experiment with the effect of output stride when employing ResNet-50 with block7 (i.e., extra block5, block6, and block7). As shown in the table, in the case of output stride = 256 (i.e., no atrous convolution at all), the performance is much worse than the others due to the severe signal decimation. When output stride gets larger and atrous convolution is applied correspondingly, the performance improves from 20.29% to 75.18%, showing that atrous convolution is essential when building more blocks cascadedly for semantic segmentation.

ResNet-50 vs. ResNet-101: We replace ResNet-50 with the deeper network ResNet-101 and change the number of cascaded blocks. As shown in Tab. 2, the performance improves as more blocks are added, but the margin of improvement becomes smaller. Noticeably, employing block7 to ResNet-50 slightly decreases the performance while it still improves the performance for ResNet-101.
| Network | block4 | block5 | block6 | block7 |
| --- | --- | --- | --- | --- |
| ResNet-50 | 64.81 | 72.14 | 74.29 | 73.88 |
| ResNet-101 | 68.39 | 73.21 | 75.34 | 75.76 |
Table 2. Going deeper with atrous convolution when employing ResNet-50 and ResNet-101 with different number of cascaded blocks at output stride = 16. Network structures "block4", "block5", "block6", and "block7" add extra 0, 1, 2, 3 cascaded modules respectively. The performance is generally improved by adopting more cascaded blocks.
| Multi-Grid | block4 | block5 | block6 | block7 |
| --- | --- | --- | --- | --- |
| (1, 1, 1) | 68.39 | 73.21 | 75.34 | 75.76 |
| (1, 2, 1) | 70.23 | 75.67 | 76.09 | **76.66** |
| (1, 2, 3) | 73.14 | 75.78 | 75.96 | 76.11 |
| (1, 2, 4) | 73.45 | 75.74 | 75.85 | 76.02 |
| (2, 2, 2) | 71.45 | 74.30 | 74.70 | 74.62 |

Table 3. Employing multi-grid method for ResNet-101 with different number of cascaded blocks at output stride = 16. The best model performance is shown in bold.
Multi-grid: We apply the multi-grid method to ResNet-101 with several cascadedly added blocks in Tab. 3. The unit rates, Multi_Grid = (r1, r2, r3), are applied to block4 and all the other added blocks. As shown in the table, we observe that (a) applying the multi-grid method is generally better than the vanilla version where (r1, r2, r3) = (1, 1, 1), (b) simply doubling the unit rates (i.e., (r1, r2, r3) = (2, 2, 2)) is not effective, and (c) going deeper with multi-grid improves the performance. Our best model is the case where block7 and (r1, r2, r3) = (1, 2, 1) are employed.
Inference strategy on val set: The proposed model is trained with output stride = 16, and then during inference we apply output stride = 8 to get a more detailed feature map. As shown in Tab. 4, interestingly, when evaluating our best cascaded model with output stride = 8, the performance improves over evaluating with output stride = 16 by 1.39%. The performance is further improved by performing inference on multi-scale inputs (with scales = {0.5, 0.75, 1.0, 1.25, 1.5, 1.75}) and also left-right flipped images. In particular, we compute as the final result the average probabilities from each scale and flipped images.
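The multi-scale and flip averaging can be sketched as follows (our own illustration; `model` maps a batch of images to per-pixel logits):

```python
import tensorflow as tf

SCALES = (0.5, 0.75, 1.0, 1.25, 1.5, 1.75)

def predict_multiscale(model, image):
    # Average class probabilities over rescaled and left-right flipped inputs.
    h, w = image.shape[0], image.shape[1]
    probs = []
    for s in SCALES:
        size = (int(h * s), int(w * s))
        scaled = tf.image.resize(image[tf.newaxis], size)
        for inp in (scaled, tf.image.flip_left_right(scaled)):
            p = tf.nn.softmax(model(inp), axis=-1)
            if inp is not scaled:
                p = tf.image.flip_left_right(p)  # undo the flip before averaging
            probs.append(tf.image.resize(p, (h, w)))
    return tf.add_n(probs) / len(probs)
```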
# 4.3. Atrous Spatial Pyramid Pooling
We then experiment with the Atrous Spatial Pyramid Pooling (ASPP) module, with the main differences from [11] being that batch normalization parameters [38] are fine-tuned and image-level features are included.
| Method | OS=16 | OS=8 | MS | Flip | mIOU |
| --- | --- | --- | --- | --- | --- |
| block7 + MG(1, 2, 1) | ✓ | | | | 76.66 |
| | | ✓ | | | 78.05 |
| | | ✓ | ✓ | | 78.93 |
| | | ✓ | ✓ | ✓ | 79.35 |
Table 4. Inference strategy on the val set. MG: Multi-grid. OS: output stride. MS: Multi-scale inputs during test. Flip: Adding left-right flipped inputs.
| Multi-Grid | ASPP | Image Pooling | mIOU |
|---|---|---|---|
| (1, 1, 1) | (6, 12, 18) | | 75.36 |
| (1, 2, 1) | (6, 12, 18) | | 75.93 |
| (1, 2, 4) | (6, 12, 18) | | 76.58 |
| (1, 2, 4) | (6, 12, 18, 24) | | 76.46 |
| (1, 2, 4) | (6, 12, 18) | ✓ | 77.21 |
Table 5. Atrous Spatial Pyramid Pooling with multi-grid method and image-level features at output stride = 16.
| Method | OS=16 | OS=8 | MS | Flip | COCO | mIOU |
|---|---|---|---|---|---|---|
| MG(1, 2, 4) + ASPP(6, 12, 18) + Image Pooling | ✓ | | | | | 77.21 |
| | | ✓ | | | | 78.51 |
| | | ✓ | ✓ | | | 79.45 |
| | | ✓ | ✓ | ✓ | | 79.77 |
| | | ✓ | ✓ | ✓ | ✓ | 82.70 |
Table 6. Inference strategy on the val set. MG: Multi-grid. ASPP: Atrous spatial pyramid pooling. OS: output stride. MS: Multi-scale inputs during test. Flip: Adding left-right flipped inputs. COCO: Model pretrained on MS-COCO.
ASPP: In Tab. 5, we experiment with the effect of incorporating multi-grid in block4 and image-level features into the improved ASPP module. We first fix ASPP = (6, 12, 18) (i.e., employ rates = (6, 12, 18) for the three parallel 3x3 convolution branches), and vary the multi-grid value. Employing Multi Grid = (1, 2, 1) is better than Multi Grid = (1, 1, 1), while further improvement is attained by adopting Multi Grid = (1, 2, 4) in the context of ASPP = (6, 12, 18) (cf. the 'block4' column in Tab. 3). If we additionally employ another parallel branch with rate = 24 for longer-range context, the performance drops slightly by 0.12%. On the other hand, augmenting the ASPP module with image-level features is effective, reaching the final performance of 77.21%.
Inference strategy on val set: Similarly, we apply output stride = 8 during inference once the model is trained. As shown in Tab. 6, employing output stride = 8 brings a 1.3% improvement over using output stride = 16; adopting multi-scale inputs and adding left-right flipped images further improve the performance by 0.94% and 0.32%, respectively. The best model with ASPP attains a performance of 79.77%, better than the best model with cascaded atrous convolution modules (79.35%), and is thus selected as our final model for test set evaluation.
Comparison with DeepLabv2: Both our best cascaded model (in Tab. 4) and ASPP model (in Tab. 6), in both cases without DenseCRF post-processing or MS-COCO pretraining, already outperform DeepLabv2 (77.69% with DenseCRF and pretrained on MS-COCO in Tab. 4 of [11]) on the PASCAL VOC 2012 val set. The improvement mainly comes from including and fine-tuning batch normalization parameters [38] in the proposed models and from having a better way to encode multi-scale context.
Appendix: We show more experimental results, such as the effect of hyper-parameters and Cityscapes [14] results, in the appendix.
Qualitative results: We provide qualitative visual results of our best ASPP model in Fig. 6. As shown in the figure, our model is able to segment objects very well without any DenseCRF post-processing.
Failure mode: As shown in the bottom row of Fig. 6, our model has difficulty in segmenting (a) sofa vs. chair, (b) dining table and chair, and (c) rare views of objects.
Pretrained on COCO: For comparison with other state-of-the-art models, we further pretrain our best ASPP model on the MS-COCO dataset [57]. From the MS-COCO trainval minus minival set, we only select the images that have annotation regions larger than 1000 pixels and contain the classes defined in PASCAL VOC 2012, resulting in about 60K images for training. Besides, the MS-COCO classes not defined in PASCAL VOC 2012 are all treated as the background class. After pretraining on the MS-COCO dataset, our proposed model attains a performance of 82.7% on the val set when using output stride = 8, multi-scale inputs, and left-right flipped images during inference. We adopt a smaller initial learning rate of 0.0001 and the same training protocol as in Sec. 4.1 when fine-tuning on the PASCAL VOC 2012 dataset.
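One reasonable reading of this selection step is sketched below with pycocotools; the annotation path and the VOC-to-COCO class-name mapping are assumptions, and summing annotation areas per image is our interpretation of the 1000-pixel criterion.

```python
from pycocotools.coco import COCO

coco = COCO('annotations/instances_train2014.json')  # hypothetical path
# COCO category names corresponding to the 20 PASCAL VOC classes (assumed).
voc_ids = set(coco.getCatIds(catNms=[
    'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train',
    'boat', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'bottle',
    'chair', 'couch', 'potted plant', 'dining table', 'tv']))

selected = []
for img_id in coco.getImgIds():
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
    # Keep the image only if its VOC-class annotations cover > 1000 pixels;
    # annotations of all other categories are later painted as background.
    voc_area = sum(a['area'] for a in anns if a['category_id'] in voc_ids)
    if voc_area > 1000:
        selected.append(img_id)
print(len(selected), 'MS-COCO images kept for pretraining')
```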
Test set result and an effective bootstrapping method: We notice that the PASCAL VOC 2012 dataset provides higher-quality annotations than the augmented dataset [29], especially for the bicycle class. We thus further fine-tune our model on the official PASCAL VOC 2012 trainval set before evaluating on the test set. Specifically, our model is trained with output stride = 8 (so that annotation details are kept) and the batch normalization parameters are frozen (see Sec. 4.1 for details). Besides, instead of performing pixel hard example mining as in [85, 70], we resort to bootstrapping on hard images. In particular, we duplicate the images that contain hard classes (namely bicycle, chair, table, potted-plant, and sofa) in the training set. As shown in Fig. 7, this simple bootstrapping method is effective for segmenting the bicycle class. In the end, our 'DeepLabv3' achieves a performance of 85.7% on the test set without any DenseCRF post-processing, as shown in Tab. 7.
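A minimal sketch of this bootstrapping step is given below; the class indices follow the standard PASCAL VOC ordering, and `load_label` is an assumed helper that returns an image's groundtruth label map.

```python
import numpy as np

# bicycle, chair, diningtable, pottedplant, sofa in standard VOC indexing.
HARD_CLASSES = {2, 9, 11, 16, 18}

def bootstrap(image_names, load_label):
    """Duplicate every image whose groundtruth contains a hard class."""
    boosted = list(image_names)
    for name in image_names:
        label = load_label(name)                    # (H, W) integer array
        if HARD_CLASSES & set(np.unique(label).tolist()):
            boosted.append(name)                    # sampled twice per epoch
    return boosted
```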
Model pretrained on JFT-300M: Motivated by the recent work of [79], we further employ a ResNet-101 model that has been pretrained on both ImageNet and the JFT-300M dataset [34, 13, 79], resulting in a performance of 86.9% on the PASCAL VOC 2012 test set.
| Method | mIOU |
|---|---|
| Adelaide VeryDeep FCN VOC [85] | 79.1 |
| LRR 4x ResNet-CRF [25] | 79.3 |
| DeepLabv2-CRF [11] | 79.7 |
| CentraleSupelec Deep G-CRF [8] | 80.2 |
| HikSeg COCO [80] | 81.4 |
| SegModel [75] | 81.8 |
| Deep Layer Cascade (LC) [52] | 82.7 |
| TuSimple [84] | 83.1 |
| Large Kernel Matters [68] | 83.6 |
| Multipath-RefineNet [54] | 84.2 |
| ResNet-38 MS COCO [86] | 84.9 |
| PSPNet [95] | 85.4 |
| IDW-CNN [83] | 86.3 |
| CASIA IVA SDN [23] | 86.6 |
| DIS [61] | 86.8 |
| DeepLabv3 | 85.7 |
| DeepLabv3-JFT | 86.9 |

Table 7. Performance on PASCAL VOC 2012 test set.
# 5. Conclusion
Our proposed model 'DeepLabv3' employs atrous convolution with upsampled filters to extract dense feature maps and to capture long-range context. Specifically, to encode multi-scale information, our proposed cascaded module gradually doubles the atrous rates, while our proposed atrous spatial pyramid pooling module, augmented with image-level features, probes the features with filters at multiple sampling rates and effective fields-of-view. Our experimental results show that the proposed model significantly improves over previous DeepLab versions and achieves comparable performance with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
Acknowledgments: We would like to acknowledge valuable discussions with Zbigniew Wojna, the help from Chen Sun and Andrew Howard, and the support from the Google Mobile Vision team.
# A. Effect of hyper-parameters
In this section, we follow the same training protocol as in the main paper and experiment with the effect of some hyper-parameters.
New training protocol: As mentioned in the main paper, we change the training protocol in [10, 11] with three main differences: (1) larger crop size, (2) upsampling logits during training, and (3) fine-tuning batch normalization. Here, we quantitatively measure the effect of the changes.
Figure 6. Visualization results on the val set when employing our best ASPP model. The last row shows a failure mode.
Figure 7. Bootstrapping on hard images improves segmentation accuracy for rare and finely annotated classes such as bicycle. Panels: (a) image, (b) groundtruth, (c) without bootstrapping, (d) with bootstrapping.
As shown in Tab. 8, DeepLabv3 attains a performance of 77.21% on the PASCAL VOC 2012 val set [20] when adopting the new training protocol setting as in the main paper. When training DeepLabv3 without fine-tuning the batch normalization, the performance drops to 75.95%. If we do not upsample the logits during training (and instead downsample the groundtruths), the performance decreases to 76.01%. Furthermore, if we employ a smaller crop size (i.e., 321 as in [10, 11]), the performance significantly decreases to 67.22%, demonstrating that the boundary effect resulting from the small crop size hurts the performance of DeepLabv3, which employs large atrous rates in the Atrous Spatial Pyramid Pooling (ASPP) module.
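The 'upsampling logits' part of the protocol amounts to computing the loss at groundtruth resolution; a sketch follows, where the void label value 255 is the usual VOC convention and an assumption here.

```python
import tensorflow as tf

def segmentation_loss(logits, labels, ignore_label=255):
    """Resize logits to the label map's size so groundtruth is never downsampled."""
    logits = tf.image.resize(logits, tf.shape(labels)[1:3], method='bilinear')
    valid = tf.not_equal(labels, ignore_label)       # mask out void pixels
    safe_labels = tf.where(valid, labels, tf.zeros_like(labels))
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=safe_labels, logits=logits)
    return tf.reduce_mean(tf.boolean_mask(loss, valid))
```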
Output stride: The value of output stride determines the output feature map resolution and in turn affects the largest batch size we could use during training. In Tab. 10, we quantitatively measure the effect of employing different output stride values during both training and evaluation on the PASCAL VOC 2012 val set. We first fix the evaluation output stride = 16, vary the training output stride, and fit the largest possible batch size for all the settings (we are able to fit batch sizes 6, 16, and 24 for training output stride equal to 8, 16, and 32, respectively). As shown in the top rows of Tab. 10, employing training output stride = 8 only attains a performance of 74.45% because we could not fit a large batch size in this setting, which degrades the performance while fine-tuning the batch normalization parameters. When employing training output stride = 32, we could fit a large batch size but we lose feature map details. On the other hand, employing training output stride = 16 strikes the best trade-off and leads to the best performance. In the bottom rows of Tab. 10, we increase the evaluation output stride to 8. All settings improve the performance except the one where training output stride = 32. We hypothesize that we lose too much feature map detail during training, and thus the model could not recover the details even when employing output stride = 8 during evaluation.
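The bookkeeping behind output stride can be made concrete with a small sketch: once the accumulated stride of the backbone reaches the requested output stride, each remaining block keeps stride 1 and instead dilates its convolutions by the accumulated factor. This is a conceptual illustration, not the released implementation.

```python
def stride_and_rate_plan(nominal_strides, output_stride):
    """Map a backbone's per-block strides to (stride, atrous rate) pairs."""
    current, rate, plan = 1, 1, []
    for s in nominal_strides:
        if current >= output_stride:
            rate *= s              # convert the removed stride into dilation
            plan.append((1, rate))
        else:
            current *= s           # still decimating: keep the nominal stride
            plan.append((s, 1))
    return plan

# With nominal strides (2, 2, 2, 2, 2): output_stride=16 leaves the last
# block undecimated at rate 2, while output_stride=8 dilates the final two
# blocks with rates 2 and 4, matching the rate doubling described earlier.
print(stride_and_rate_plan((2, 2, 2, 2, 2), 8))  # [(2,1),(2,1),(2,1),(1,2),(1,4)]
```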
| Crop Size | UL | BN | mIOU |
|---|---|---|---|
| 513 | ✓ | ✓ | 77.21 |
| 513 | ✓ | | 75.95 |
| 513 | | ✓ | 76.01 |
| 321 | ✓ | ✓ | 67.22 |
Table 8. Effect of hyper-parameters during training on PASCAL VOC 2012 val set at output stride=16. UL: Upsampling Logits. BN: Fine-tuning batch normalization.
| Batch size | mIOU |
|---|---|
| 4 | 64.43 |
| 8 | 75.76 |
| 12 | 76.49 |
| 16 | 77.21 |
Table 9. Effect of batch size on PASCAL VOC 2012 val set. We employ output stride=16 during both training and evaluation. A large batch size is required while training the model with fine-tuning of the batch normalization parameters.
| train output stride | eval output stride | mIOU |
|---|---|---|
| 8 | 16 | 74.45 |
| 16 | 16 | 77.21 |
| 32 | 16 | 75.90 |
| 8 | 8 | 75.62 |
| 16 | 8 | 78.51 |
| 32 | 8 | 75.75 |
Table 10. Effect of output stride on PASCAL VOC 2012 val set. Employing output stride=16 during training leads to better performance for both eval output stride = 8 and 16.
# B. Asynchronous training
In this section, we experiment with TensorFlow asynchronous training [1] for DeepLabv3. We measure the effect of training the model with multiple replicas on the PASCAL VOC 2012 semantic segmentation dataset. Our baseline employs a single replica and requires a training time of 3.65 days with a K80 GPU. As shown in Tab. 11, we found that the performance with multiple replicas does not drop compared to the baseline. However, the training time with 32 replicas is significantly reduced to 2.74 hours.
# C. DeepLabv3 on Cityscapes dataset
Cityscapes [14] is a large-scale dataset containing high-quality pixel-level annotations of 5000 images (2975, 500, and 1525 for the training, validation, and test sets, respectively) and about 20000 coarsely annotated images. Following the evaluation protocol [14], 19 semantic labels are used for evaluation without considering the void label.
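For reference, the mIOU metric over the 19 evaluated classes can be sketched as a confusion-matrix computation that simply skips void pixels; the void value 255 is an assumption, and predictions are assumed to lie in [0, 19).

```python
import numpy as np

def mean_iou(preds, labels, num_classes=19, void_label=255):
    """mIOU over pairs of (H, W) prediction/groundtruth label maps."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, l in zip(preds, labels):
        mask = l != void_label                       # ignore void pixels
        idx = num_classes * l[mask].astype(np.int64) + p[mask].astype(np.int64)
        conf += np.bincount(idx, minlength=num_classes ** 2).reshape(
            num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    present = union > 0                              # classes that occur
    return float(np.mean(inter[present] / union[present]))
```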
| num replicas | mIOU | relative training time |
|---|---|---|
| 1 | 77.21 | 1.00x |
| 2 | 77.15 | 0.50x |
| 4 | 76.79 | 0.25x |
| 8 | 77.02 | 0.13x |
| 16 | 77.18 | 0.06x |
| 32 | 76.69 | 0.03x |
Table 11. Evaluation performance on PASCAL VOC 2012 val set when adopting asynchronous training.
| OS=16 | OS=8 | MS | Flip | mIOU |
|---|---|---|---|---|
| ✓ | | | | 77.23 |
| | ✓ | | | 77.82 |
| | ✓ | ✓ | | 79.06 |
| | ✓ | ✓ | ✓ | 79.30 |
Table 12. DeepLabv3 on the Cityscapes val set when trained with only the train fine set. OS: output stride. MS: Multi-scale inputs during inference. Flip: Adding left-right flipped inputs.
We first evaluate the proposed DeepLabv3 model on the validation set when training with only the 2975 images of the train fine set. We adopt the same training protocol as before, except that we employ 90K training iterations, a crop size equal to 769, and running inference on the whole image instead of on overlapped regions as in [11]. As shown in Tab. 12, DeepLabv3 attains a performance of 77.23% when evaluated at output stride = 16. Evaluating the model at output stride = 8 improves the performance to 77.82%. When we employ multi-scale inputs (we could fit scales = {0.75, 1, 1.25} on a K40 GPU) and add left-right flipped inputs, the model achieves 79.30%.
In order to compete with other state-of-the-art models, we further train DeepLabv3 on the trainval coarse set (i.e., the 3475 finely annotated images and the extra 20000 coarsely annotated images). We adopt more scales and a finer output stride during inference. In particular, we perform inference with scales = {0.75, 1, 1.25, 1.5, 1.75, 2} and evaluation output stride = 4 with CPUs, which contributes an extra 0.8% and 0.1%, respectively, on the validation set compared to using only three scales and output stride = 8. In the end, as shown in Tab. 13, our proposed DeepLabv3 achieves a performance of 81.3% on the test set. Some results on the val set are visualized in Fig. 8.
# References
[1] M. Abadi, A. Agarwal, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.
[2] A. Adams, J. Baek, and M. A. Davis. Fast high-dimensional filtering using the permutohedral lattice. In Eurographics, 2010.
| Method | Coarse | mIOU |
|---|---|---|
| DeepLabv2-CRF [11] | | 70.4 |
| Deep Layer Cascade [52] | | 71.0 |
| ML-CRNN [21] | | 71.2 |
| Adelaide_context [55] | | 71.6 |
| FRRN [70] | | 71.8 |
| LRR-4x [25] | ✓ | 71.8 |
| RefineNet [54] | | 73.6 |
| FoveaNet [51] | | 74.1 |
| Ladder DenseNet [46] | | 74.3 |
| PEARL [42] | | 75.4 |
| Global-Local-Refinement [93] | | 77.3 |
| SAC-multiple [94] | | 78.1 |
| SegModel [75] | ✓ | 79.2 |
| TuSimple_Coarse [84] | ✓ | 80.1 |
| Netwarp [24] | ✓ | 80.5 |
| ResNet-38 [86] | ✓ | 80.6 |
| PSPNet [95] | ✓ | 81.2 |
| DeepLabv3 | ✓ | 81.3 |
Table 13. Performance on Cityscapes test set. Coarse: Use train extra set (coarse annotations) as well. Only a few top models with known references are listed in this table.
[3] V. Badrinarayanan, A. Kendall, and R. Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015.
[4] A. Brandt. Multi-level adaptive solutions to boundary-value problems. Mathematics of Computation, 31(138):333–390, 1977.
[5] W. L. Briggs, V. E. Henson, and S. F. McCormick. A multigrid tutorial. SIAM, 2000.
[6] W. Byeon, T. M. Breuel, F. Raue, and M. Liwicki. Scene labeling with LSTM recurrent neural networks. In CVPR, 2015.
[7] H. Caesar, J. Uijlings, and V. Ferrari. COCO-Stuff: Thing and stuff classes in context. arXiv:1612.03716, 2016.
[8] S. Chandra and I. Kokkinos. Fast, exact and multi-scale inference for semantic image segmentation with deep Gaussian CRFs. arXiv:1603.08358, 2016.
[9] L.-C. Chen, J. T. Barron, G. Papandreou, K. Murphy, and A. L. Yuille. Semantic image segmentation with task-specific edge detection using CNNs and a discriminatively trained domain transform. In CVPR, 2016.
[10] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
[11] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv:1606.00915, 2016.
[12] L.-C. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille. Attention to scale: Scale-aware semantic image segmentation. In CVPR, 2016.
[13] F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv:1610.02357, 2016.
Figure 8. Visualization results on Cityscapes val set when training with only the train fine set.
[14] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
[15] J. Dai, K. He, and J. Sun. Convolutional feature masking for joint object and stuff segmentation. arXiv:1412.1283, 2014.
[16] J. Dai, K. He, and J. Sun. BoxSup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In ICCV, 2015.
[17] J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. arXiv:1605.06409, 2016.
[18] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. arXiv:1703.06211, 2017.
1706.05587 | 50 | [19] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. arXiv:1411.4734, 2014.
[20] M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. IJCV, 2014.
[21] H. Fan, X. Mei, D. Prokhorov, and H. Ling. Multi-level contextual rnns with attention model for scene labeling. arXiv:1607.02537, 2016.
[22] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. PAMI, 2013. [23] J. Fu, J. Liu, Y. Wang, and H. Lu. Stacked deconvolutional network for semantic segmentation. arXiv:1708.04943, 2017. [24] R. Gadde, V. Jampani, and P. V. Gehler. Semantic video cnns
through representation warping. In ICCV, 2017. | 1706.05587#50
1706.05587 | 51 | through representation warping. In ICCV, 2017.
[25] G. Ghiasi and C. C. Fowlkes. Laplacian reconstruction and refinement for semantic segmentation. arXiv:1605.02264, 2016.
[26] A. Giusti, D. Ciresan, J. Masci, L. Gambardella, and J. Schmidhuber. Fast image scanning with deep max-pooling convolutional neural networks. In ICIP, 2013.
[27] S. Gould, R. Fulton, and D. Koller. Decomposing a scene into geometric and semantically consistent regions. In ICCV. IEEE, 2009.
[28] K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. In ICCV, 2005.
[29] B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In ICCV, 2011.
[30] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015. | 1706.05587#51
1706.05587 | 52 | [31] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[32] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.
[33] X. He, R. S. Zemel, and M. Carreira-Perpiñán. Multiscale conditional random fields for image labeling. In CVPR, 2004. [34] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge
in a neural network. In NIPS, 2014.
[35] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. | 1706.05587#52
1706.05587 | 53 | [35] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[36] M. Holschneider, R. Kronland-Martinet, J. Morlet, and P. Tchamitchian. A real-time algorithm for signal analysis with the help of the wavelet transform. In Wavelets: Time-Frequency Methods and Phase Space, pages 289–297. 1989. [37] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, and K. Murphy. Speed/accuracy trade-offs for modern convolutional object detectors. In CVPR, 2017.
[38] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
[39] M. A. Islam, M. Rochan, N. D. Bruce, and Y. Wang. Gated feedback refinement network for dense image labeling. In CVPR, 2017. | 1706.05587#53
1706.05587 | 55 | [42] X. Jin, X. Li, H. Xiao, X. Shen, Z. Lin, J. Yang, Y. Chen, J. Dong, L. Liu, Z. Jie, J. Feng, and S. Yan. Video scene parsing with predictive feature learning. In ICCV, 2017. [43] P. Kohli, P. H. Torr, et al. Robust higher order potentials for enforcing label consistency. IJCV, 82(3):302–324, 2009. [44] S. Kong and C. Fowlkes. Recurrent scene parsing with perspective understanding in the loop. arXiv:1705.07238, 2017. [45] P. Krähenbühl and V. Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In NIPS, 2011. [46] I. Krešo, S. Šegvić, and J. Krapac. Ladder-style densenets for semantic segmentation of large natural images. In ICCV CVRSUAD workshop, 2017.
[47] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012. | 1706.05587#55
1706.05587 | 56 | [47] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[48] L. Ladicky, C. Russell, P. Kohli, and P. H. Torr. Associative hierarchical crfs for object class image segmentation. In ICCV, 2009.
[49] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[50] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.
[51] X. Li, Z. Jie, W. Wang, C. Liu, J. Yang, X. Shen, Z. Lin, Q. Chen, S. Yan, and J. Feng. Foveanet: Perspective-aware urban scene parsing. arXiv:1708.02421, 2017. | 1706.05587#56
1706.05587 | 57 | [52] X. Li, Z. Liu, P. Luo, C. C. Loy, and X. Tang. Not all pixels are equal: Difficulty-aware semantic segmentation via deep layer cascade. arXiv:1704.01344, 2017.
[53] X. Liang, X. Shen, D. Xiang, J. Feng, L. Lin, and S. Yan. Semantic object parsing with local-global long short-term memory. arXiv:1511.04510, 2015.
[54] G. Lin, A. Milan, C. Shen, and I. Reid. Refinenet: Multi-path refinement networks with identity mappings for high-resolution semantic segmentation. arXiv:1611.06612, 2016. [55] G. Lin, C. Shen, I. Reid, et al. Efficient piecewise training of deep structured models for semantic segmentation. arXiv:1504.01013, 2015.
[56] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. arXiv:1612.03144, 2016. | 1706.05587#57
1706.05587 | 58 | [57] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[58] W. Liu, A. Rabinovich, and A. C. Berg. Parsenet: Looking wider to see better. arXiv:1506.04579, 2015.
[59] Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang. Semantic image segmentation via deep parsing network. In ICCV, 2015. [60] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. [61] P. Luo, G. Wang, L. Lin, and X. Wang. Deep dual learning
for semantic image segmentation. In ICCV, 2017.
[62] M. Mostajabi, P. Yadollahpour, and G. Shakhnarovich. Feedforward semantic segmentation with zoom-out features. In CVPR, 2015. | 1706.05587#58
1706.05587 | 59 | [63] R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille. The role of context for object detection and semantic segmentation in the wild. In CVPR, 2014.
[64] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In ICCV, 2015.
[65] G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a dcnn for semantic image segmentation. In ICCV, 2015.
[66] G. Papandreou, I. Kokkinos, and P.-A. Savalle. Modeling local and global deformations in deep learning: Epitomic convolution, multiple instance learning, and sliding window detection. In CVPR, 2015.
[67] G. Papandreou and P. Maragos. Multigrid geometric active contour models. TIP, 16(1):229–240, 2007. | 1706.05587#59
1706.05587 | 60 | [67] G. Papandreou and P. Maragos. Multigrid geometric active contour models. TIP, 16(1):229–240, 2007.
[68] C. Peng, X. Zhang, G. Yu, G. Luo, and J. Sun. Large kernel matters—improve semantic segmentation by global convolutional network. arXiv:1703.02719, 2017.
[69] P. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. In ICML, 2014.
[70] T. Pohlen, A. Hermans, M. Mathias, and B. Leibe. Full-resolution residual networks for semantic segmentation in street scenes. arXiv:1611.08323, 2016.
[71] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[72] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015. | 1706.05587#60
1706.05587 | 61 | [73] A. G. Schwing and R. Urtasun. Fully connected deep structured networks. arXiv:1503.02351, 2015.
[74] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and
detection using convolutional networks. arXiv:1312.6229, 2013.
[75] F. Shen, R. Gan, S. Yan, and G. Zeng. Semantic segmentation via structured patch prediction, context crf and guidance crf. In CVPR, 2017.
[76] J. Shotton, J. Winn, C. Rother, and A. Criminisi. Textonboost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. IJCV, 2009.
[77] A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta. Beyond skip connections: Top-down modulation for object detection. arXiv:1612.06851, 2016. | 1706.05587#61
1706.05587 | 62 | [78] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. [79] C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
[80] H. Sun, D. Xie, and S. Pu. Mixed context networks for semantic segmentation. arXiv:1610.05854, 2016.
[81] D. Terzopoulos. Image analysis using multigrid relaxation methods. TPAMI, (2):129–139, 1986.
[82] R. Vemulapalli, O. Tuzel, M.-Y. Liu, and R. Chellappa. Gaussian conditional random field network for semantic segmentation. In CVPR, 2016.
[83] G. Wang, P. Luo, L. Lin, and X. Wang. Learning object inter- actions and descriptions for semantic image segmentation. In CVPR, 2017.
[84] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmentation. arXiv:1702.08502, 2017. | 1706.05587#62
1706.05587 | 63 | [85] Z. Wu, C. Shen, and A. van den Hengel. Bridging category-level and instance-level semantic image segmentation. arXiv:1605.06885, 2016.
[86] Z. Wu, C. Shen, and A. van den Hengel. Wider or deeper: Revisiting the resnet model for visual recognition. arXiv:1611.10080, 2016.
[87] F. Xia, P. Wang, L.-C. Chen, and A. L. Yuille. Zoom better to see clearer: Human part segmentation with auto zoom net. arXiv:1511.06881, 2015.
[88] Z. Yan, H. Zhang, Y. Jia, T. Breuel, and Y. Yu. Combining the best of convolutional layers and recurrent layers: A hybrid network for semantic segmentation. arXiv:1603.04871, 2016. [89] J. Yao, S. Fidler, and R. Urtasun. Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation. In CVPR, 2012.
[90] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016. | 1706.05587#63
1706.05587 | 64 | [90] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
[91] S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv:1605.07146, 2016.
[92] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV, 2011.
[93] R. Zhang, S. Tang, M. Lin, J. Li, and S. Yan. Global-residual and local-boundary refinement networks for rectifying scene parsing predictions. IJCAI, 2017.
[94] R. Zhang, S. Tang, Y. Zhang, J. Li, and S. Yan. Scale-adaptive convolutions for scene parsing. In ICCV, 2017.
[95] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. arXiv:1612.01105, 2016. | 1706.05587#64
1706.05125 | 1 | {mikelewis,denisy,ynd}@fb.com {parikh,dbatra}@gatech.edu
# Abstract
Much of human dialogue occurs in semi-cooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other's reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states. We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. Our code and dataset are publicly available.1 | 1706.05125#1 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
1706.05125 | 2 | that end-to-end neural models can be trained to negotiate by maximizing the likelihood of human actions. This approach is scalable and domain-independent, but does not model the strategic skills required for negotiating well. We further show that models can be improved by training and decoding to maximize reward instead of likelihood—by training with self-play reinforcement learning, and using rollouts to estimate the expected reward of utterances during decoding.
To study semi-cooperative dialogue, we gather a dataset of 5808 dialogues between humans on a negotiation task. Users were shown a set of items with a value for each, and asked to agree how to divide the items with another user who has a different, unseen, value function (Figure 1).
We first train recurrent neural networks to imitate human actions. We find that models trained to maximise the likelihood of human utterances can generate fluent language, but make comparatively poor negotiators, which are overly willing to compromise. We therefore explore two methods for improving the model's strategic reasoning skills—both of which attempt to optimise for the agent's goals, rather than simply imitating humans:
# 1 Introduction | 1706.05125#2
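The paragraph above describes first training recurrent networks to imitate human negotiators by maximizing the likelihood of their utterances, conditioned on each agent's private goals. A minimal sketch of such a goal-conditioned likelihood model, assuming PyTorch; the goal encoding, vocabulary handling, and hidden size are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class GoalConditionedLM(nn.Module):
    """Predict dialogue tokens conditioned on the agent's private goals."""
    def __init__(self, vocab_size, goal_dim, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.goal_mlp = nn.Linear(goal_dim, hidden)   # encodes item counts/values
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, goal, tokens):
        h0 = torch.tanh(self.goal_mlp(goal)).unsqueeze(0)  # (1, B, H) initial state
        x, _ = self.gru(self.embed(tokens), h0)
        return self.out(x)

def supervised_step(model, opt, goal, tokens):
    """One maximum-likelihood update on a human dialogue (teacher forcing)."""
    logits = model(goal, tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```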
1706.05125 | 3 | # 1 Introduction
Intelligent agents often need to cooperate with others who have different goals, and typically use natural language to agree on decisions. Negotiation is simultaneously a linguistic and a reasoning problem, in which an intent must be formulated and then verbally realised. Such dialogues contain both cooperative and adversarial elements, and require agents to understand, plan, and generate utterances to achieve their goals (Traum et al., 2008; Asher et al., 2012).
Firstly, instead of training to optimise likelihood, we show that our agents can be considerably improved using self play, in which pre-trained models practice negotiating with each other in order to optimise performance. To avoid the models diverging from human language, we interleave reinforcement learning updates with supervised updates. For the first time, we show that end-to-end dialogue agents trained using reinforcement learning outperform their supervised counterparts in negotiations with humans.
Secondly, we introduce a new form of planning for dialogue called dialogue rollouts, in which an agent simulates complete dialogues during decoding to estimate the reward of utterances. We show
We collect the first large dataset of natural language negotiations between two people, and show | 1706.05125#3
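The "Firstly" paragraph above interleaves reinforcement-learning updates from self-play with supervised updates so the policy improves its reward without drifting from human language. A rough sketch of that interleaving with a REINFORCE-style update; `simulate_selfplay`, `supervised_loss`, and the 4:1 mixing ratio are hypothetical stand-ins, not the paper's exact procedure.

```python
import random
import torch

def train_interleaved(model, opt, human_dialogues, n_updates, rl_prob=0.8):
    """Alternate self-play RL updates with supervised imitation updates."""
    for _ in range(n_updates):
        if random.random() < rl_prob:
            # Self-play: sample a full negotiation against another copy of the
            # model, then reinforce the agent's own token choices by the reward.
            log_probs, reward = simulate_selfplay(model)      # assumed helper
            loss = -(reward * torch.stack(log_probs).sum())   # REINFORCE
        else:
            # Supervised anchor: a likelihood update on a human dialogue.
            goal, tokens = random.choice(human_dialogues)
            loss = supervised_loss(model, goal, tokens)       # assumed helper
        opt.zero_grad(); loss.backward(); opt.step()
```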
1706.05125 | 4 | 1 https://github.com/facebookresearch/end-to-end-negotiator
Figure 1: A dialogue in our Mechanical Turk interface, which we used to collect a negotiation dataset.
that decoding to maximise the reward function (rather than likelihood) significantly improves performance against both humans and machines.
Analysing the performance of our agents, we find evidence of sophisticated negotiation strategies. For example, we find instances of the model feigning interest in a valueless issue, so that it can later "compromise" by conceding it. Deceit is a complex skill that requires hypothesising the other agent's beliefs, and is learnt relatively late in child development (Talwar and Lee, 2002). Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.
The rest of the paper proceeds as follows: §2 describes the collection of a large dataset of human-human negotiation dialogues. §3 describes a baseline supervised model, which we then show can be improved by goal-based training (§4) and decoding (§5). §6 measures the performance of our models and humans on this task, and §7 gives a detailed analysis and suggests future directions.
# 2 Data Collection
# 2.1 Overview | 1706.05125#4
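Decoding to maximise the reward function, as stated above, can be approximated with the dialogue rollouts introduced earlier: score each candidate utterance by simulating complete continuations of the negotiation and averaging the final rewards. `sample_candidates`, `rollout_to_end`, and the candidate/rollout counts below are assumed helpers for illustration, not the paper's exact decoding procedure.

```python
def choose_utterance(model, state, n_candidates=8, n_rollouts=4):
    """Pick the candidate message whose simulated continuations score best."""
    best, best_value = None, float('-inf')
    for utt in sample_candidates(model, state, n_candidates):   # assumed helper
        total = 0.0
        for _ in range(n_rollouts):
            # Simulate both sides to the end of the dialogue and score the deal.
            total += rollout_to_end(model, state, utt)          # assumed helper
        value = total / n_rollouts
        if value > best_value:
            best, best_value = utt, value
    return best
```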
1706.05125 | 5 | # 2 Data Collection
# 2.1 Overview
agreement has been made, both agents independently output what they think the agreed decision was. If conflicting decisions are made, both agents are given zero reward.
# 2.2 Task
Our task is an instance of multi-issue bargaining (Fershtman, 1990), and is based on DeVault et al. (2015). Two agents are both shown the same collection of items, and instructed to divide them so that each item is assigned to one agent.
Each agent is given a different randomly generated value function, which gives a non-negative value for each item. The value functions are constrained so that: (1) the total value for a user of all items is 10; (2) each item has non-zero value to at least one user; and (3) some items have non-zero value to both users. These constraints enforce that it is not possible for both agents to receive a maximum score, and that no item is worthless to both agents, so the negotiation will be competitive. After 10 turns, we allow agents the option to complete the negotiation with no agreement, which is worth 0 points to both users. We use 3 item types (books, hats, balls), and between 5 and 7 total items in the pool. Figure 1 shows our interface. | 1706.05125#5
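The task description above fixes three constraints on the randomly generated value functions and a zero-reward rule for conflicting final decisions. Below is a minimal rejection-sampling sketch of a scenario generator and scorer consistent with those constraints; the per-type count range and the sampling scheme itself are assumptions, since the text does not specify how scenarios were drawn.

```python
import random

ITEM_TYPES = ('book', 'hat', 'ball')

def sample_values(counts, tries=200):
    """Try to sample per-item values so this agent's items total exactly 10."""
    for _ in range(tries):
        vals = [random.randint(0, 10) for _ in counts]
        if sum(c * v for c, v in zip(counts, vals)) == 10:
            return vals
    return None  # some count combinations (e.g. 3, 3, 3) admit no solution

def make_scenario():
    """Sample item counts and two value functions meeting constraints (1)-(3)."""
    while True:
        counts = [random.randint(1, 4) for _ in ITEM_TYPES]  # assumed >=1 per type
        if not 5 <= sum(counts) <= 7:
            continue
        a, b = sample_values(counts), sample_values(counts)
        if a is None or b is None:
            continue
        if (all(x > 0 or y > 0 for x, y in zip(a, b))          # no worthless item
                and any(x > 0 and y > 0 for x, y in zip(a, b))):  # contested item
            return counts, a, b

def score(choice_a, choice_b, counts, vals_a, vals_b):
    """Independent outputs; conflicting splits give both agents zero reward."""
    if any(x + y != c for x, y, c in zip(choice_a, choice_b, counts)):
        return 0, 0
    return (sum(x * v for x, v in zip(choice_a, vals_a)),
            sum(y * v for y, v in zip(choice_b, vals_b)))
```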
1706.05125 | 6 | To enable end-to-end training of negotiation agents, we ï¬rst develop a novel negotiation task and curate a dataset of human-human dialogues for this task. This task and dataset follow our proposed general framework for studying semi- is cooperative dialogue. shown an input specifying a space of possible ac- tions and a reward function which will score the outcome of the negotiation. Agents then sequen- tially take turns of either sending natural language messages, or selecting that a ï¬nal decision has been reached. When one agent selects that an
# 2.3 Data Collection
We collected a set of human-human dialogues us- ing Amazon Mechanical Turk. Workers were paid $0.15 per dialogue, with a $0.05 bonus for max- imal scores. We only used workers based in the United States with a 95% approval rating and at least 5000 previous HITs. Our data collection in- terface was adapted from that of Das et al. (2016). We collected a total of 5808 dialogues, based on 2236 unique scenarios (where a scenario is the | 1706.05125#6 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 | [
{
"id": "1606.03152"
},
{
"id": "1510.03055"
},
{
"id": "1703.04908"
},
{
"id": "1511.08099"
},
{
"id": "1604.04562"
},
{
"id": "1611.08669"
},
{
"id": "1703.06585"
},
{
"id": "1606.01541"
},
{
"id": "1605.07683"
},
{
"id": "1506.05869"
}
] |
[Figure 2, left panel] Perspective: Agent 1. Input: 3x book (value=1), 2x hat (value=3), 1x ball (value=1). Output: 2x book, 2x hat. Dialogue: write: I want the books and the hats, you get the ball / read: Give me a book too and we have a deal / write: Ok, deal / read: <choose>
[Figure 2, right panel] Perspective: Agent 2. Input: 3x book (value=2), 2x hat (value=1), 1x ball (value=2). Output: 1x book, 1x ball. Dialogue: read: I want the books and the hats, you get the ball / write: Give me a book too and we have a deal / read: Ok, deal / write: <choose>
Figure 2: Converting a crowd-sourced dialogue (left) into two training examples (right), from the perspective of each user. The perspectives differ on their input goals, output choice, and in special tokens marking whether a statement was read or written. We train conditional language models to predict the dialogue given the input, and additional models to predict the output given the dialogue.
We held out a test set of 252 scenarios (526 dialogues). Holding out test scenarios means that models must generalise to new situations.
# 3 Likelihood Model

We propose a simple but effective baseline model for the conversational agent, in which a sequence-to-sequence model is trained to produce the complete dialogue, conditioned on an agent's input.

# 3.1 Data Representation
Each dialogue is converted into two training examples, showing the complete conversation from the perspective of each agent. The examples differ on their input goals, output choice, and whether utterances were read or written.
Training examples contain an input goal g, specifying the available items and their values, a dialogue x, and an output decision o specifying which items each agent will receive. Specifically, we represent g as a list of six integers corresponding to the count and value of each of the three item types. Dialogue x is a list of tokens x0..T containing the turns of each agent interleaved with symbols marking whether a turn was written by the agent or their partner, terminating in a special token indicating one agent has marked that an agreement has been made. Output o is six integers describing how many of each of the three item types are assigned to each agent. See Figure 2.
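As an illustration, the Figure 2 example from agent 1's perspective could be encoded as follows; the exact token layout of o is our assumption, not a specification from the released code:

```python
# Input g: (count, value) pairs for books, hats, balls -- agent 1's view
g = [3, 1,   # 3 books worth 1 each
     2, 3,   # 2 hats worth 3 each
     1, 1]   # 1 ball worth 1

# Dialogue x: agent/partner turns marked by write:/read:, ending in <choose>
x = ("write: i want the books and the hats , you get the ball <eos> "
     "read: give me a book too and we have a deal <eos> "
     "write: ok , deal <eos> read: <choose>").split()

# Output o: item counts per agent; assumed order [own books, hats, balls,
# partner's books, hats, balls]
o = [2, 2, 0,   # agent 1 keeps 2 books and 2 hats
     1, 0, 1]   # agent 2 gets 1 book and 1 ball
```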
# 3.2 Supervised Learning

We train a sequence-to-sequence network to generate an agent's perspective of the dialogue conditioned on the agent's input goals (Figure 3a).

The model uses 4 recurrent neural networks, implemented as GRUs (Cho et al., 2014): GRUw, GRUg, GRU→o, and GRU←o.

The agent's input goals g are encoded using GRUg. We refer to the final hidden state as hg. The model then predicts each token xt from left to right, conditioned on the previous tokens and hg. At each time step t, GRUw takes as input the previous hidden state ht−1, previous token xt−1 (embedded with a matrix E), and input encoding hg. Conditioning on the input at each time step helps the model learn dependencies between language and goals.
ht = GRUw(ht−1, [Ext−1, hg]) (1)
The token at each time step is predicted with a softmax, which uses weight tying with the embedding matrix E (Mao et al., 2015):
pθ(xt|x0..t−1, g) ∝ exp(E^T ht) (2)
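A minimal PyTorch sketch of Equations (1) and (2); module and variable names are ours, and since the hidden size (128, per §6.1) differs from the embedding size (256), this sketch assumes a linear projection before the tied softmax:

```python
import torch
import torch.nn as nn

class DialogueLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, goal_dim=64, hid_dim=128):
        super().__init__()
        self.E = nn.Embedding(vocab_size, emb_dim)           # shared embeddings
        self.cell = nn.GRUCell(emb_dim + goal_dim, hid_dim)  # GRU_w
        self.proj = nn.Linear(hid_dim, emb_dim)              # reconcile sizes

    def step(self, x_prev, h_prev, h_g):
        # Eq (1): h_t = GRU_w(h_{t-1}, [E x_{t-1}, h_g]); h_g is fed every step
        inp = torch.cat([self.E(x_prev), h_g], dim=-1)
        h_t = self.cell(inp, h_prev)
        # Eq (2): p(x_t | x_{0..t-1}, g) via weight tying with E
        logits = self.proj(h_t) @ self.E.weight.t()
        return torch.log_softmax(logits, dim=-1), h_t

lm = DialogueLM(vocab_size=1000)
logp, h = lm.step(torch.tensor([5]), torch.zeros(1, 128), torch.zeros(1, 64))
```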
Figure 3: Our model: tokens are predicted conditioned on previous words and the input, then the output is predicted using attention over the complete dialogue. In supervised training (3a), we train the model to predict the tokens of both agents. During decoding and reinforcement learning (3b) some tokens are sampled from the model, but some are generated by the other agent and are only encoded by the model.
Note that the model predicts both agents' words, enabling its use as a forward model in Section 5.
At the end of the dialogue, the agent outputs a set of tokens o representing the decision. We generate each output conditionally independently, using a separate classifier for each. The classifiers share a bidirectional GRUo and an attention mechanism (Bahdanau et al., 2014) over the dialogue, and additionally condition on the input goals.
h→o_t = GRU→o(h→o_{t−1}, [Ext, ht]) (3)
h←o_t = GRU←o(h←o_{t+1}, [Ext, ht]) (4)
h^o_t = [h→o_t, h←o_t] (5)
h^a_t = W[tanh(W′h^o_t)] (6)
αt = exp(w · h^a_t) / Σ_{t′} exp(w · h^a_{t′}) (7)
h^s = tanh(W^s[hg, Σ_t αt ht]) (8)
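To make Equations (3)-(8) concrete, here is a compact PyTorch sketch of the attention-based summary; names are ours, and for brevity the pooling step here attends over the bidirectional states rather than the GRUw states ht used in Equation (8):

```python
import torch
import torch.nn as nn

class OutputAttention(nn.Module):
    def __init__(self, in_dim, hid=256, goal_dim=64):
        super().__init__()
        self.bigru = nn.GRU(in_dim, hid, bidirectional=True)  # Eqs (3)-(5)
        self.W = nn.Linear(2 * hid, 2 * hid)
        self.Wp = nn.Linear(2 * hid, 2 * hid)                 # W' in Eq (6)
        self.w = nn.Linear(2 * hid, 1, bias=False)            # scoring vector
        self.Ws = nn.Linear(goal_dim + 2 * hid, 2 * hid)      # Eq (8)

    def forward(self, states, h_g):
        # states: (T, 1, in_dim) per-token features [E x_t, h_t]; h_g: (1, goal_dim)
        h_o, _ = self.bigru(states)                # h^o_t = [h_fwd, h_bwd]
        h_a = self.W(torch.tanh(self.Wp(h_o)))     # Eq (6)
        alpha = torch.softmax(self.w(h_a), dim=0)  # Eq (7), over time steps
        pooled = (alpha * h_o).sum(dim=0)          # attention-weighted pooling
        return torch.tanh(self.Ws(torch.cat([h_g, pooled], dim=-1)))  # Eq (8)
```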
The output tokens are predicted using softmax:

pθ(oi|x0..t, g) ∝ exp(W^{oi} h^s) (9)

Unlike the Neural Conversational Model (Vinyals and Le, 2015), our approach shares all parameters for reading and generating tokens.

The model is trained to minimize the negative log likelihood of the token sequence x0..T conditioned on the input goals g, and of the outputs o conditioned on x and g. The two terms are weighted with a hyperparameter α:

L(θ) = − Σ_{x,g} Σ_t log pθ(xt|x0..t−1, g) − α Σ_{x,g,o} Σ_j log pθ(oj|x0..T, g) (10)

where the first term is the token prediction loss and the second the output choice prediction loss.
# 3.3 Decoding
During decoding, the model must generate an output token xt conditioned on dialogue history x0..t−1 and input goals g, by sampling from pθ:
xt ∼ pθ(xt|x0..t−1, g) (11)
If the model generates a special end-of-turn token, it then encodes a series of tokens output by the other agent, until its next turn (Figure 3b).
The dialogue ends when either agent outputs a special end-of-dialogue token. The model then outputs a set of choices o. We choose each item independently, but enforce consistency by checking the solution is in a feasible set O:
o* = argmax_{o ∈ O} Π_i pθ(oi|x0..T, g) (12)
In our task, a solution is feasible if each item is assigned to exactly one agent. The space of solutions is small enough to be tractably enumerated.
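A sketch of this enumeration (Equation 12); `item_probs` is a hypothetical stand-in for the per-item classifier outputs, mapping "how many of item type i the agent gets" to a probability:

```python
from itertools import product

def best_output(item_probs, counts):
    # each feasible solution gives the agent 0..count_i of each type;
    # the partner implicitly receives the remainder
    feasible = product(*[range(c + 1) for c in counts])
    def joint(o):
        p = 1.0
        for i, k in enumerate(o):
            p *= item_probs[i].get(k, 0.0)
        return p
    return max(feasible, key=joint)

# e.g. counts = [3, 2, 1] gives 4 * 3 * 2 = 24 feasible allocations
probs = [{0: 0.1, 1: 0.2, 2: 0.6, 3: 0.1},
         {0: 0.2, 1: 0.2, 2: 0.6},
         {0: 0.7, 1: 0.3}]
print(best_output(probs, [3, 2, 1]))  # -> (2, 2, 0)
```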
# 4 Goal-based Training
Supervised learning aims to imitate the actions of human users, but does not explicitly attempt to maximise an agent's goals. Instead, we explore pre-training with supervised learning, and then fine-tuning against the evaluation metric using reinforcement learning. Similar two-stage learning strategies have been used previously (e.g. Li et al. (2016); Das et al. (2017)).
During reinforcement learning, an agent A attempts to improve its parameters from conversations with another agent B. While the other agent B could be a human, in our experiments we used our fixed supervised model that was trained to imitate humans. The second model is fixed as we found that updating the parameters of both agents led to divergence from human language. In effect, agent A learns to improve by simulating conversations with the help of a surrogate forward model.
[Figure 4 schematic. Dialogue history: "read: You get one book and I'll take everything else." Candidate response 1: "write: Great deal, thanks!" with simulated continuations ("read: Any time" then choose: 1x book, score 1; "read: No problem" then choose: 1x book, score 1). Candidate response 2: "write: No way, I need all 3 hats" with simulated continuations ("read: I'll give you 2" then choose: 2x hat, score 6; "read: Ok, fine" then choose: 3x hat, score 9). Columns: Dialogue history, Candidate responses, Simulation of rest of dialogue, Score.]
Figure 4: Decoding through rollouts: The model first generates a small set of candidate responses. For each candidate it simulates the future conversation by sampling, and estimates the expected future reward by averaging the scores. The system outputs the candidate with the highest expected reward.
Agent A reads its goals g and then generates tokens x0..n by sampling from pθ. When it generates an end-of-turn marker, it then reads in tokens xn+1..m generated by agent B. These turns alternate until one agent emits a token ending the dialogue. Both agents then output a decision o and collect a reward from the environment (which will be 0 if they output different decisions). We denote the subset of tokens generated by A as XA (e.g. tokens with incoming arrows in Figure 3b).
After a complete dialogue has been generated, we update agent A's parameters based on the outcome of the negotiation. Let rA be the score agent A achieved in the completed dialogue, T be the length of the dialogue, γ be a discount factor that rewards actions at the end of the dialogue more strongly, and µ be a running average of completed dialogue rewards so far². We define the future reward R for an action xt ∈ XA as follows:
Algorithm 1 Dialogue Rollouts
 1: procedure ROLLOUT(x0..i, g)
 2:   u* ← ∅
 3:   for c ∈ {1..C} do                        ▷ C candidate moves
 4:     j ← i
 5:     do                                      ▷ Rollout to end of turn
 6:       j ← j + 1
 7:       xj ∼ pθ(xj | x0..j−1, g)
 8:     while xj ∉ {read:, choose:}
 9:     u ← xi+1..xj                            ▷ u is candidate move
10:     R(u) ← 0
11:     for s ∈ {1..S} do                       ▷ S samples per move
12:       k ← j                                 ▷ Start rollout from end of u
13:       while xk ≠ choose: do                 ▷ Rollout to end of dialogue
14:         k ← k + 1
15:         xk ∼ pθ(xk | x0..k−1, g)
16:       o ← argmax_{o′ ∈ O} p(o′ | x0..k, g)  ▷ Calculate rollout output and reward
17:       R(u) ← R(u) + r(o) p(o | x0..k, g)
18:     if R(u) > R(u*) then u* ← u
19:   return u*
R(xt) = Σ_{xt ∈ XA} γ^(T−t) (rA(o) − µ) (13)
We then optimise the expected reward of each action xt ∈ XA:
L^RL_θ = E_{xt ∼ pθ(xt|x0..t−1, g)}[R(xt)] (14)
The gradient of L^RL_θ is calculated as in REINFORCE (Williams, 1992):
∇θ L^RL_θ = Σ_{xt ∈ XA} E_{xt}[R(xt) ∇θ log pθ(xt|x0..t−1, g)] (15)
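The update in Equations (13)-(15) can be sketched in a few lines of PyTorch; the function below and its toy usage are our own illustration (names are assumptions, not the released implementation):

```python
import torch

def reinforce_loss(logprobs, T, positions, r_a, mu, gamma=0.95):
    """logprobs: scalar tensors log p_theta(x_t | ...) for agent A's tokens.
    T: dialogue length; positions: index t of each of A's tokens;
    r_a: A's final score; mu: running-average baseline."""
    loss = 0.0
    for lp, t in zip(logprobs, positions):
        R_t = (gamma ** (T - t)) * (r_a - mu)  # Eq (13): discounted, baselined
        loss = loss - R_t * lp                  # minimising this matches Eq (15)
    return loss

# toy usage: three generated tokens in a dialogue of length 10
lps = [torch.tensor(-1.2, requires_grad=True) for _ in range(3)]
reinforce_loss(lps, T=10, positions=[2, 5, 9], r_a=7.0, mu=5.0).backward()
```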
# 5 Goal-based Decoding

Likelihood-based decoding (§3.3) may not be optimal. For instance, an agent may be choosing between accepting an offer, or making a counter offer. The former will often have a higher likelihood under our model, as there are fewer ways to agree than to make another offer, but the latter may lead to a better outcome. Goal-based decoding also allows more complex dialogue strategies. For example, a deceptive utterance is likely to have a low model score (as users were generally honest in the supervised data), but may achieve high reward.
² As all rewards are non-negative, we instead re-scale them by subtracting the mean reward found during self play. Shifting in this way can reduce the variance of our estimator.
We instead explore decoding by maximising expected reward. We achieve this by using pθ as a forward model for the complete dialogue, and then deterministically computing the reward. Rewards for an utterance are averaged over samples to calculate expected future reward (Figure 4).
We use a two-stage process: First, we generate c candidate utterances U = u0..c, representing possible complete turns that the agent could make, by sampling from pθ until the end-of-turn token is reached. Let x0..n−1 be the current dialogue history. We then calculate the expected reward R(u) of candidate utterance u = xn..n+k by repeatedly sampling xn+k+1..T from pθ, then choosing the best output o using Equation 12, and finally deterministically computing the reward r(o). The reward is scaled by the probability of the output given the dialogue, because if the agents select different outputs then they both receive 0 reward.

R(xn..n+k) = E_{x(n+k+1..T); o ∼ pθ}[r(o) pθ(o|x0..T)] (16)
We then return the utterance maximizing R.
u* = argmax_{u ∈ U} R(u) (17)
We use 5 rollouts for each of 10 candidate turns.
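The rollout procedure can be summarised in the following sketch; `model` is an assumed interface (sample_turn, sample_completion, best_output, and output_prob are hypothetical helpers, not the released API), and r(o) deterministically scores a final decision against the agent's values:

```python
def rollout_decode(model, history, goal, r, C=10, S=5):
    best_u, best_R = None, float("-inf")
    for _ in range(C):
        u = model.sample_turn(history, goal)       # candidate complete turn
        R_u = 0.0
        for _ in range(S):
            full = model.sample_completion(history + u, goal)  # to <choose>
            o = model.best_output(full, goal)                  # Eq (12)
            R_u += r(o) * model.output_prob(o, full, goal)     # Eq (16)
        if R_u > best_R:                 # argmax over candidates, Eq (17)
            best_u, best_R = u, R_u      # comparing sums suffices: S is fixed
    return best_u
```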
# 6 Experiments
# 6.1 Training Details
We implement our models using PyTorch. All hyper-parameters were chosen on a development dataset. The input tokens are embedded into a 64-dimensional space, while the dialogue tokens are embedded with 256-dimensional embeddings (with no pre-training). The input GRUg has a hidden layer of size 64 and the dialogue GRUw is of size 128. The output GRU→o and GRU←o both have a hidden state of size 256; the size of h^s is 256 as well. During supervised training, we optimise using stochastic gradient descent with a minibatch size of 16, an initial learning rate of 1.0, Nesterov momentum with µ=0.1 (Nesterov, 1983), and clipping gradients whose L2 norm exceeds 0.5. We train the model for 30 epochs and pick the snapshot of the model with the best validation perplexity. We then anneal the learning rate by a factor of 5 each epoch. We weight the terms in the loss function (Equation 10) using α=0.5. We do not train against output decisions where humans selected different agreements. Tokens occurring fewer than 20 times are replaced with an 'unknown' token.
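A minimal PyTorch sketch of this supervised recipe; `model` and `loader` are placeholders, and the annealing schedule is simplified relative to the snapshot-then-anneal procedure described above:

```python
import torch

def train_supervised(model, loader, epochs=30):
    # SGD with Nesterov momentum (mu = 0.1); minibatches of 16 assumed in loader
    opt = torch.optim.SGD(model.parameters(), lr=1.0, momentum=0.1, nesterov=True)
    for epoch in range(epochs):
        for batch in loader:
            loss = model(batch)  # Eq (10) with alpha = 0.5
            opt.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)  # L2 clip
            opt.step()
    # the paper then picks the best-validation snapshot and continues with the
    # learning rate annealed by a factor of 5 per epoch, e.g. via
    # torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.2)
    return model
```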
During reinforcement learning, we use a learning rate of 0.1, clip gradients above 1.0, and use a discount factor of γ=0.95. After every 4 reinforcement learning updates, we make a supervised update with mini-batch size 16 and learning rate 0.5, and we clip gradients at 1.0. We used 4086 simulated conversations.
When sampling words from pθ, we reduce the variance by doubling the values of logits (i.e. using a temperature of 0.5).
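For example, a short sketch of this sampling step:

```python
import torch

def sample_with_temperature(logits, temperature=0.5):
    # dividing logits by 0.5 doubles them, sharpening the softmax
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)

token = sample_with_temperature(torch.tensor([2.0, 1.0, 0.5]))
```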
# 6.2 Comparison Systems
We compare the performance of the following: LIKELIHOOD uses supervised training and decoding (§3), RL is fine-tuned with goal-based self-play (§4), ROLLOUTS uses supervised training combined with goal-based decoding using rollouts (§5), and RL+ROLLOUTS uses rollouts with a base model trained with reinforcement learning.
# 6.3 Intrinsic Evaluation
For development, we measured the perplexity of user-generated utterances, conditioned on the input and previous dialogue.
Results are shown in Table 3: the simple LIKELIHOOD model produces the most human-like responses, and the alternative training and decoding strategies cause a divergence from human language. Note, however, that this divergence may not necessarily correspond to lower quality language; it may also indicate different strategic decisions about what to say. Results in §6.4 show all models could converse with humans.
# 6.4 End-to-End Evaluation
We measure end-to-end performance in dialogues both with the likelihood-based agent and with humans on Mechanical Turk, on held-out scenarios. Humans were told that they were interacting with other humans, as they had been during the collection of our dataset (and few appeared to realize they were in conversation with machines).
We measure the following statistics:
Score: The average score for each agent (which could be a human or model), out of 10.
Agreement: The percentage of dialogues where both agents agreed on the same decision.
Pareto Optimality: The percentage of Pareto optimal solutions for agreed deals (a solution is Pareto optimal if neither agent's score can be improved without lowering the other's score). Lower scores indicate inefficient negotiations.
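A small sketch of the Pareto-optimality check, as our own helper that enumerates feasible splits as in §3.3:

```python
from itertools import product

def pareto_optimal(split, counts, v1, v2):
    # split: how many of each item type agent 1 takes; agent 2 gets the rest
    s1 = sum(k * v for k, v in zip(split, v1))
    s2 = sum((c - k) * v for c, k, v in zip(counts, split, v2))
    for alt in product(*[range(c + 1) for c in counts]):
        a1 = sum(k * v for k, v in zip(alt, v1))
        a2 = sum((c - k) * v for c, k, v in zip(counts, alt, v2))
        # another split dominates if no one is worse and someone is better
        if a1 >= s1 and a2 >= s2 and (a1 > s1 or a2 > s2):
            return False
    return True

# Figure 2's deal: agent 1 takes 2 books + 2 hats, agent 2 takes 1 book + 1 ball
print(pareto_optimal([2, 2, 0], [3, 2, 1], [1, 3, 1], [2, 1, 2]))  # True
```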
vs. LIKELIHOOD
Model        Score (all)   Score (agreed)   % Agreed   % Pareto Optimal
LIKELIHOOD   5.4 vs. 5.5   6.2 vs. 6.2      87.9       49.6
RL           7.1 vs. 4.2   7.9 vs. 4.7      89.9       58.6
ROLLOUTS     7.3 vs. 5.1   7.9 vs. 5.5      92.9       63.7
RL+ROLLOUTS  8.3 vs. 4.2   8.8 vs. 4.5      94.4       74.8

vs. Human
Model        Score (all)   Score (agreed)   % Agreed   % Pareto Optimal
LIKELIHOOD   4.7 vs. 5.8   6.2 vs. 7.6      76.5       66.2
RL           4.3 vs. 5.0   6.4 vs. 7.5      67.3       69.1
ROLLOUTS     5.2 vs. 5.4   7.1 vs. 7.4      72.1       78.3
RL+ROLLOUTS  4.6 vs. 4.2   8.0 vs. 7.1      57.2       82.4
Table 1: End task evaluation on heldout scenarios, against the LIKELIHOOD model and humans from Mechanical Turk. The maximum score is 10. Score (all) gives 0 points when agents failed to agree. | 1706.05125#26 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
Metric                       Dataset
Number of Dialogues          5808
Average Turns per Dialogue   6.6
Average Words per Turn       7.6
% Agreed                     80.1
Average Score (/10)          6.0
% Pareto Optimal             76.9

Table 2: Statistics on our dataset of crowd-sourced dialogues between humans.
# 7 Analysis
Table 1 shows large gains from goal-based methods. In this section, we explore the strengths and weaknesses of our models.
Goal-based models negotiate harder. The RL+ROLLOUTS model has much longer dialogues with humans than LIKELIHOOD (7.2 turns vs. 5.3 on average), indicating that the model is accepting deals less quickly, and negotiating harder.
Model        Valid PPL   Test PPL   Test Avg. Rank
LIKELIHOOD   5.47        5.62       521.8
RL           5.86        6.03       517.6
ROLLOUTS     -           -          844.1
RL+ROLLOUTS  -           -          859.8
Table 3: Intrinsic evaluation showing the average perplexity of tokens and rank of complete turns (out of 2083 unique human messages from the test set). Lower is more human-like for both.