Preface  ix
Author  xi

1 Deep learning background  3
1.1 What is deep learning?  3
1.5 Challenges ahead for deep learning with remote sensing images  8

2.1.2 Streaming mechanism  10
2.3 Orfeo ToolBox + TensorFlow = OTBTF  12
2.3.2 Featured applications  14
2.3.4 Multiple input sources and outputs  16

II Patch-based classification  19

4 Data used: the Tokyo dataset  23
4.2 Remote sensing imagery  23

5 A simple convolutional neural network  27
5.5 Train the model from scratch  37
5.6 Comparison with Random Forest  40

6 Fully Convolutional Neural Network  45
6.1 Using the existing model as an FCN  45
6.2 Pixel-wise fully convolutional model  46

7 Classifiers on deep features  51
7.2 Overview of composite applications in OTB  52

8 Dealing with multiple sources  55
8.2 Model with multiple inputs  56
8.5.1.2 Fully convolutional mode  62

III Semantic segmentation  67

10 Semantic segmentation of optical imagery  69

11 Data used: the Amsterdam dataset  73
11.3.1 OSM downloader plugin  75
11.3.3 Prepare the vector layer  77

12.1 Input data pre-processing  83
12.1.1 Satellite image pansharpening  83
12.1.2 Image normalization  84
12.1.3.1 Patch position seeding  84
12.1.3.2 Patch position selection  86
12.2.2.2 Expression field  95
12.2.3 Generate the Saved Model  95

14 Gapfilling of optical images: principle  103

16.1.2.2 Filtering values  118
16.1.2.4 Spatial resampling  118
16.2.1 Patch position seeding  119
16.2.1.1 Sentinel-2 image masks  119
16.2.1.6 Training and validation datasets  126
16.2.2 Extraction of patches  127
16.3 More: automate steps with the OTB Python API  128
16.3.1 Build the pipeline  129

17.1 Training from Python  133

18.2 Generating the image  137

Bibliography  145
Index  149