LVLN: A Landmark-Based Deep Neural Network Model for Vision-and-Language Navigation
KIPS Transactions on Software and Data Engineering, Vol. 8, No. 9, pp. 379-390, Sep. 2019
https://doi.org/10.3745/KTSDE.2019.8.9.379
Keywords: Vision-and-Language Navigation, deep neural network, Landmark, Attention, Progress Monitor
Abstract
Cite this article
[IEEE Style]
J. Hwang and I. Kim, "LVLN: A Landmark-Based Deep Neural Network Model for Vision-and-Language Navigation," KIPS Transactions on Software and Data Engineering, vol. 8, no. 9, pp. 379-390, 2019. DOI: https://doi.org/10.3745/KTSDE.2019.8.9.379.
[ACM Style]
Jisu Hwang and Incheol Kim. 2019. LVLN: A Landmark-Based Deep Neural Network Model for Vision-and-Language Navigation. KIPS Transactions on Software and Data Engineering, 8, 9, (2019), 379-390. DOI: https://doi.org/10.3745/KTSDE.2019.8.9.379.