
DRM restrictions

  • Copying: not allowed

  • Printing: not allowed

  • E-book usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means that free software must be installed in order to unlock and read it. To read this e-book, you must create an Adobe ID. The e-book can be downloaded to 6 devices (one user with the same Adobe ID).

    Required software
    To read this e-book on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read this e-book on a PC or Mac, you need Adobe Digital Editions (a free application designed specifically for e-books; it is not the same as Adobe Reader, which you probably already have on your computer).

    You cannot read this e-book on an Amazon Kindle.

In today's world, deep learning source code and a wealth of open-access geospatial imagery are readily available and easily accessible. However, most people lack the educational tools to make use of these resources. Deep Learning for Remote Sensing Images with Open Source Software is the first practical book to introduce deep learning techniques using free open source tools for processing real-world remote sensing images. The approaches detailed in this book are generic and can be adapted to many different applications of remote sensing image processing, including landcover mapping, forestry, urban studies, disaster mapping, and image restoration. Written with practitioners and students in mind, this book links the theory and practical use of existing tools and data to apply deep learning techniques to remote sensing images and data.

Specific Features of this Book:

  • The first book that explains how to apply deep learning techniques to publicly available free data (Spot-7 and Sentinel-2 images, OpenStreetMap vector data) using open source software (QGIS, Orfeo ToolBox, TensorFlow)

  • Presents approaches suited to real-world images and data, targeting large-scale processing and GIS applications

  • Introduces state-of-the-art deep learning architecture families that can be applied to remote sensing, mainly for landcover mapping but also for generic tasks (e.g. image restoration)

  • Suited for deep learning beginners and readers with some GIS knowledge; no coding knowledge is required to learn practical skills

  • Teaches deep learning techniques through many step-by-step remote sensing data processing exercises
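The patch-based exercises outlined above revolve around extracting small image patches at sampled positions and normalizing them before feeding a network. As a rough illustration of that idea (the function and variable names below are illustrative, not taken from the book's code), a minimal numpy sketch might look like this:

```python
import numpy as np

def extract_patches(image, centers, size):
    """Extract size x size patches around (row, col) centers
    from a (height, width, bands) array."""
    half = size // 2
    return np.stack([image[r - half:r + half + 1,
                           c - half:c + half + 1, :]
                     for r, c in centers])

# Toy 5-band array standing in for a satellite image tile.
rng = np.random.default_rng(0)
image = rng.uniform(0, 10000, size=(64, 64, 5)).astype(np.float32)

# Sample positions, e.g. drawn from a terrain-truth layer.
centers = [(10, 10), (32, 40), (50, 20)]
patches = extract_patches(image, centers, size=17)  # shape (3, 17, 17, 5)
patches = patches / 10000.0  # crude normalization of reflectances to [0, 1]
```

In practice the book relies on dedicated OTB applications for sampling and patch extraction rather than hand-written loops, but the data layout (a stack of small multi-band patches plus labels) is the same.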
Preface
Author
I Backgrounds
1 Deep learning background
1.1 What is deep learning?
1.2 Convolution
1.3 Pooling
1.4 Activation functions
1.5 Challenges ahead for deep learning with remote sensing images
2 Software
2.1 Orfeo ToolBox
2.1.1 Applications
2.1.2 Streaming mechanism
2.1.3 Remote modules
2.1.4 The Python API
2.2 TensorFlow
2.2.1 APIs
2.2.2 Computations
2.2.3 Graphs
2.3 Orfeo ToolBox + TensorFlow = OTBTF
2.3.1 Installation
2.3.2 Featured applications
2.3.3 Principle
2.3.4 Multiple input sources and outputs
2.4 QGIS
II Patch-based classification
3 Introduction
4 Data used: the Tokyo dataset
4.1 Description
4.2 Remote sensing imagery
4.3 Terrain truth
5 A simple convolutional neural network
5.1 Normalization
5.2 Sampling
5.2.1 Selection
5.2.2 Extraction
5.3 Training
5.3.1 Principle
5.3.2 Model architecture
5.3.2.1 Input
5.3.2.2 Layers
5.3.2.3 Estimated class
5.3.2.4 Loss function
5.3.2.5 Optimizer
5.4 Generate the model
5.5 Train the model from scratch
5.6 Comparison with Random Forest
5.7 Inference
6 Fully Convolutional Neural Network
6.1 Using the existing model as an FCN
6.2 Pixel-wise fully convolutional model
6.3 Training
6.4 Inference
7 Classifiers on deep features
7.1 Principle
7.2 Overview of composite applications in OTB
7.3 Training
7.4 Inference
8 Dealing with multiple sources
8.1 More sources?
8.2 Model with multiple inputs
8.3 Normalization
8.4 Sampling
8.5 Training
8.5.1 Inference
8.5.1.1 Patch-based mode
8.5.1.2 Fully convolutional mode
9 Discussion
III Semantic segmentation
10 Semantic segmentation of optical imagery
10.1 Introduction
10.2 Overview
11 Data used: the Amsterdam dataset
11.1 Description
11.2 Spot-7 image
11.3 OpenStreetMap data
11.3.1 OSM downloader plugin
11.3.2 Download OSM data
11.3.3 Prepare the vector layer
12 Mapping buildings
12.1 Input data pre-processing
12.1.1 Satellite image pansharpening
12.1.2 Image normalization
12.1.3 Sample selection
12.1.3.1 Patch position seeding
12.1.3.2 Patch position selection
12.1.3.3 Patches split
12.1.4 Rasterization
12.1.5 Patch extraction
12.2 Building the model
12.2.1 Architecture
12.2.2 Implementation
12.2.2.1 Exact output
12.2.2.2 Expression field
12.2.3 Generate the SavedModel
12.3 Training the model
12.4 Inference
13 Discussion
IV Image restoration
14 Gapfilling of optical images: principle
14.1 Introduction
14.2 Method
14.3 Architecture
14.3.1 Encoder
14.3.2 Decoder
14.3.3 Loss
15 The Marmande dataset
15.1 Description
15.2 Sentinel-2 images
15.3 Sentinel-1 image
16 Pre-processing
16.1 Sentinel images
16.1.1 Optical images
16.1.2 SAR image
16.1.2.1 Calibration
16.1.2.2 Filtering values
16.1.2.3 Linear stretch
16.1.2.4 Spatial resampling
16.2 Patches
16.2.1 Patch position seeding
16.2.1.1 Sentinel-2 image masks
16.2.1.2 Merge masks
16.2.1.3 Grid generation
16.2.1.4 Grid filtering
16.2.1.5 Patch centroids
16.2.1.6 Training and validation datasets
16.2.2 Extraction of patches
16.3 More: automate steps with the OTB Python API
16.3.1 Build the pipeline
16.3.2 Run the pipeline
17 Model training
17.1 Training from Python
17.2 Get the code
17.3 Use the code
17.3.1 Description
17.3.2 Parameters
17.4 Export the model
18 Inference
18.1 Inputs and outputs
18.2 Generating the image
18.3 Postprocessing
19 Discussion
Bibliography
Index
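Chapter 1 of the table of contents covers the building blocks of a convolutional network: convolution (1.2), pooling (1.3), and activation functions (1.4). As a hedged illustration of how these three operations compose (a plain numpy sketch, not the book's implementation):

```python
import numpy as np

def conv2d(x, kernel):
    """'Valid' 2-D cross-correlation of a single-band image with a kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = x.shape[0] - x.shape[0] % size, x.shape[1] - x.shape[1] % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0)

# A tiny 6x6 single-band "image": conv -> activation -> pooling.
x = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0            # 3x3 averaging kernel
y = max_pool(relu(conv2d(x, k)))     # result has shape (2, 2)
```

Each stage shrinks the spatial extent: the 3x3 "valid" convolution maps 6x6 to 4x4, and the 2x2 pooling halves that to 2x2, which is why patch sizes in the later chapters must be chosen with the network's receptive field in mind.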
Remi Cresson received the M.Sc. in electrical engineering from the Grenoble Institute of Technology, France, in 2009. He is with the Land, Environment, Remote Sensing and Spatial Information Joint Research Unit (UMR TETIS) at the French Research Institute of Science and Technology for Environment and Agriculture (Irstea), Montpellier, France. His research and engineering interests include remote sensing image processing, high-performance computing, and geospatial data interoperability. He is a member of the Orfeo ToolBox Project Steering Committee and a charter member of the Open Source Geospatial Foundation (OSGeo).