
Evaluating Software Architectures: Methods and Case Studies [Hardcover]

3.68/5 (88 ratings by Goodreads)
  • Format: Hardback, 368 pages, height x width x thickness: 238x162x24 mm, weight: 640 g
  • Series: SEI Series in Software Engineering
  • Publication date: 08-Nov-2001
  • Publisher: Addison-Wesley Educational Publishers Inc
  • ISBN-10: 020170482X
  • ISBN-13: 9780201704822

More information

The foundation of any software system is its architecture. Using this book, you can evaluate every aspect of architecture in advance, at remarkably low cost, identifying changes that can dramatically improve any system's performance, security, reliability, and maintainability. As the practice of software architecture has matured, it has become possible to identify causal connections between architectural design decisions and the qualities and properties of the systems that follow from them. This book shows how, offering step-by-step guidance and detailed practical examples, complete with sample artifacts like those evaluators will encounter. The techniques presented here apply not only to software architectures but also to system architectures encompassing computing hardware, networking equipment, and other elements. For all software architects, software engineers, developers, IT managers, and others responsible for creating, evaluating, or implementing software architectures.
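
The methods the book covers revolve around concrete quality attribute scenarios; in the ATAM (Chapter 3), these are organized into a quality attribute utility tree and rated by importance and difficulty so that analysis effort goes to the highest-stakes scenarios first. As a rough illustration of that idea, here is a minimal sketch in Python. It is not an artifact from the book: the class names, the H/M/L rating encoding, and the example scenarios are all hypothetical.

```python
# Hypothetical sketch of an ATAM-style quality attribute utility tree.
# Names and ratings are illustrative, not taken from the book.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """A concrete quality attribute scenario at a leaf of the tree."""
    description: str
    importance: str   # stakeholder-assessed value: "H", "M", or "L"
    difficulty: str   # architect-assessed difficulty: "H", "M", or "L"

@dataclass
class Refinement:
    """A refinement of an attribute (e.g., 'data latency' under performance)."""
    name: str
    scenarios: list[Scenario] = field(default_factory=list)

@dataclass
class QualityAttribute:
    """A root-level branch of the utility tree (performance, modifiability, ...)."""
    name: str
    refinements: list[Refinement] = field(default_factory=list)

def high_priority(tree: list[QualityAttribute]) -> list[tuple[str, str, Scenario]]:
    """Return (attribute, refinement, scenario) triples rated (H, H),
    i.e., the scenarios most worth analyzing against the architecture."""
    return [
        (qa.name, ref.name, s)
        for qa in tree
        for ref in qa.refinements
        for s in ref.scenarios
        if s.importance == "H" and s.difficulty == "H"
    ]

# Example tree with one branch, loosely modeled on the kind of
# scenario an ATAM evaluation elicits.
utility = [
    QualityAttribute("performance", [
        Refinement("data latency", [
            Scenario("Deliver a target-track update to the operator "
                     "display within 200 ms under peak load.", "H", "H"),
            Scenario("Recompute the route overlay within 2 s of a "
                     "map change.", "M", "L"),
        ]),
    ]),
]

for attr, ref, s in high_priority(utility):
    print(f"[{attr} / {ref}] ({s.importance},{s.difficulty}) {s.description}")
```

Running the sketch prints only the (H, H) scenario, mirroring the way an ATAM evaluation concentrates analysis on the scenarios that are both most valuable to stakeholders and hardest to achieve.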

Table of Contents

List of Figures xiii
List of Tables xv
Preface xvii
Acknowledgments xxi
Reader's Guide xxiii

What Is Software Architecture? 1
  Architecture as a Vehicle for Communication among Stakeholders 3
    Architecture and Its Effects on Stakeholders 3
    Architectural Views 4
    Architecture Description Languages 10
  Architecture as the Manifestation of the Earliest Design Decisions 11
    Architectural Styles 12
  Architecture as a Reusable, Transferable Abstraction of a System 13
  Summary 14
  For Further Reading 15
  Discussion Questions 16

Evaluating a Software Architecture 19
  Why Evaluate an Architecture? 23
  When Can an Architecture Be Evaluated? 24
  Who's Involved? 26
  What Result Does an Architecture Evaluation Produce? 27
  For What Qualities Can We Evaluate an Architecture? 30
  Why Are Quality Attributes Too Vague for Analysis? 32
  What Are the Outputs of an Architecture Evaluation? 34
    Outputs from the ATAM, the SAAM, and ARID 34
    Outputs Only from the ATAM 35
  What Are the Benefits and Costs of Performing an Architecture Evaluation? 37
  For Further Reading 41
  Discussion Questions 42

The ATAM---A Method for Architecture Evaluation 43
  Summary of the ATAM Steps 44
  Detailed Description of the ATAM Steps 45
    Present the ATAM 45
    Present the Business Drivers 46
    Present the Architecture 47
    Identify the Architectural Approaches 47
    Generate the Quality Attribute Utility Tree 50
    Analyze the Architectural Approaches 56
    Brainstorm and Prioritize Scenarios 59
    Analyze the Architectural Approaches 68
    Present the Results 68
  The Phases of the ATAM 70
    Phase 0 Activities 71
    Phase 1 Activities 76
    Phase 2 Activities 77
    Phase 3 Activities 78
  For Further Reading 84
  Discussion Questions 84

The Battlefield Control System---The First Case Study in Applying the ATAM 87
  Preparation 88
  Phase 1 89
    Present the ATAM 89
    Present the Business Drivers 89
    Present the Architecture 89
    Identify the Architectural Approaches 90
    Generate the Quality Attribute Utility Tree 92
    Analyze the Architectural Approaches 92
  Phase 2 100
    Brainstorm and Prioritize Scenarios 100
    Analyze the Architectural Approaches 102
    Present the Results 102
  Results of the BCS Evaluation 103
    Documentation 103
    Requirements 105
    Sensitivities and Tradeoffs 106
    Architectural Risks 106
  Summary 107
  Discussion Questions 107

Understanding Quality Attributes 109
  Quality Attribute Characterizations 110
    Performance 111
    Availability 115
    Modifiability 118
    Characterizations Inspire Questions 120
  Using Quality Attribute Characterizations in the ATAM 121
  Attribute-Based Architectural Styles 124
  Summary 125
  For Further Reading 126
  Discussion Questions 126

A Case Study in Applying the ATAM 127
  Background 128
  Phase 0: Partnership and Preparation 129
    Phase 0, Step 1: Present the ATAM 130
    Phase 0, Step 2: Describe Candidate System 132
    Phase 0, Step 3: Make a Go/No-Go Decision 134
    Phase 0, Step 4: Negotiate the Statement of Work 135
    Phase 0, Step 5: Form the Core Evaluation Team 137
    Phase 0, Step 6: Hold Evaluation Team Kick-off Meeting 140
    Phase 0, Step 7: Prepare for Phase 1 142
    Phase 0, Step 8: Review the Architecture 147
  Phase 1: Initial Evaluation 148
    Phase 1, Step 1: Present the ATAM 149
    Phase 1, Step 2: Present Business Drivers 152
    Phase 1, Step 3: Present the Architecture 157
    Phase 1, Step 4: Identify Architectural Approaches 162
    Phase 1, Step 5: Generate Quality Attribute Utility Tree 164
    Phase 1, Step 6: Analyze the Architectural Approaches 172
  Hiatus between Phase 1 and Phase 2 183
  Phase 2: Complete Evaluation 184
    Phase 2, Step 0: Prepare for Phase 2 184
    Phase 2, Steps 1-6 187
    Phase 2, Step 7: Brainstorm and Prioritize Scenarios 187
    Phase 2, Step 8: Analyze Architectural Approaches 196
    Phase 2, Step 9: Present Results 199
  Phase 3: Follow-Up 202
    Phase 3, Step 1: Produce the Final Report 203
    Phase 3, Step 2: Hold the Postmortem Meeting 204
    Phase 3, Step 3: Build Portfolio and Update Artifact Repositories 207
  For Further Reading 209
  Discussion Questions 209

Using the SAAM to Evaluate an Example Architecture 211
  Overview of the SAAM 212
    Inputs to a SAAM Evaluation 213
    Outputs from a SAAM Evaluation 213
  Steps of a SAAM Evaluation 214
    Step 1: Develop Scenarios 214
    Step 2: Describe the Architecture(s) 216
    Step 3: Classify and Prioritize the Scenarios 217
    Step 4: Individually Evaluate Indirect Scenarios 218
    Step 5: Assess Scenario Interactions 218
    Step 6: Create the Overall Evaluation 219
  A Sample SAAM Agenda 220
  A SAAM Case Study 222
    ATAT System Overview 222
    Step 1: Develop Scenarios, First Iteration 223
    Step 2: Describe the Architecture(s), First Iteration 224
    Step 1: Develop Scenarios, Second Iteration 225
    Step 2: Describe the Architecture(s), Second Iteration 227
    Step 3: Classify and Prioritize the Scenarios 228
    Step 4: Individually Evaluate Indirect Scenarios 231
    Step 5: Assess Scenario Interactions 235
    Step 6: Create the Overall Evaluation--Results and Recommendations 236
  Summary 238
  For Further Reading 239
  Discussion Questions 239

ARID--An Evaluation Method for Partial Architectures 241
  Active Design Reviews 242
  ARID: An ADR/ATAM Hybrid 245
  The Steps of ARID 245
    Phase 1: Rehearsal 246
    Phase 2: Review 246
  A Case Study in Applying ARID 248
    Carrying Out the Steps 249
    Results of the Exercise 251
  Summary 252
  For Further Reading 252
  Discussion Questions 253

Comparing Software Architecture Evaluation Methods 255
  Questioning Techniques 257
    Questionnaires and Checklists 257
    Scenarios and Scenario-Based Methods 261
  Measuring Techniques 263
    Metrics 264
    Simulations, Prototypes, and Experiments 265
    Rate-Monotonic Analysis 265
    Automated Tools and Architecture Description Languages 266
  Hybrid Techniques 267
    Software Performance Engineering 267
    The ATAM 268
  Summary 271
  For Further Reading 273
  Discussion Questions 273

Growing an Architecture Evaluation Capability in Your Organization 275
  Growing Organizational Buy-in 276
  Growing a Pool of Evaluators 276
  Establishing a Corporate Memory 278
    Cost and Benefit Data 278
    Method Guidance 281
    Reusable Artifacts 283
  Summary 285
  Discussion Questions 285

Conclusions 287
  You Are Now Ready! 287
  What Methods Have You Seen? 288
  Why Evaluate Architectures? 288
  Why Does the ATAM Work? 290
  A Parting Message 296

Appendix A: An Example Attribute-Based Architectural Style 297
  Problem Description 297
  Stimulus/Response 298
  Architectural Style 298
  Analysis 299
    Reasoning 299
    Priority Assignment 300
    Priority Inversion 301
    Blocking Time 301
  For Further Reading 302

References 303
Index 307


Paul Clements is a senior member of the technical staff at the SEI, where he works on software architecture and product line engineering. He is the author of five books and more than three dozen papers on these and other topics.

Rick Kazman is a senior member of the technical staff at the SEI. He is also an Associate Professor at the University of Hawaii. He is the author of two books, editor of two more, and has written more than seventy papers on software engineering and related topics.

Mark Klein is a senior member of the technical staff at the SEI. He is an adjunct professor in the Master of Software Engineering program at Carnegie Mellon and a coauthor of A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems (Kluwer Academic Publishers, 1993).




