Signals And Systems 2Nd Edition Pdf | [Pdf] Solution Manual | Signals And Systems 2Nd Edition Oppenheim \U0026 Willsky 17997 좋은 평가 이 답변

당신은 주제를 찾고 있습니까 “signals and systems 2nd edition pdf – [PDF] Solution Manual | Signals and Systems 2nd Edition Oppenheim \u0026 Willsky“? 다음 카테고리의 웹사이트 you.tfvp.org 에서 귀하의 모든 질문에 답변해 드립니다: you.tfvp.org/blog. 바로 아래에서 답을 찾을 수 있습니다. 작성자 Michael Lenoir 이(가) 작성한 기사에는 조회수 793회 및 좋아요 1개 개의 좋아요가 있습니다.

Table of Contents

signals and systems 2nd edition pdf 주제에 대한 동영상 보기

여기에서 이 주제에 대한 비디오를 시청하십시오. 주의 깊게 살펴보고 읽고 있는 내용에 대한 피드백을 제공하세요!

d여기에서 [PDF] Solution Manual | Signals and Systems 2nd Edition Oppenheim \u0026 Willsky – signals and systems 2nd edition pdf 주제에 대한 세부정보를 참조하세요

Download here:
https://sites.google.com/view/booksaz/pdfsolution-manual-of-signals-and-systems

#SolutionsManuals #TestBanks #EngineeringBooks #EngineerBooks #EngineeringStudentBooks #MechanicalBooks #ScienceBooks

signals and systems 2nd edition pdf 주제에 대한 자세한 내용은 여기를 참조하세요.

Signals and Systems 2nd Edition(by Oppenheim)

ISBN 0-13-814757-4 l. System analysis. 2. Signal theory (Telecommunication) I. Willsky, Alan S. II. Nawab, Syed Ham. III. Title. IV.

+ 여기에 더 보기

Source: www.academia.edu

Date Published: 10/1/2021

View: 1402

Signals and Systems – GUC

2nd ed. p. cm. – Prentice-Hall signal processing series. Includes bibliographical references and index. ISBN 0-13-814757-4 l. System analysis. 2.

+ 여기에 보기

Source: eee.guc.edu.eg

Date Published: 5/16/2021

View: 7037

signals abd systems 2nd ed.pdf – PDFCOFFEE.COM

2nd ed. p. cm. – Prentice-Hall signal processing series Includes bibliographical references and index. ISBN 0-13-814757-4 l. System analysis. 2.

+ 여기에 보기

Source: pdfcoffee.com

Date Published: 3/18/2021

View: 5680

[PDF] Signals and Systems – Second Edition – askbooks.net

Signals and Systems – Second Edition – ALAN V. OPPENHEIM and ALAN S. WILLSKY with S. HAMID NAWAB. Advertisement. This book is the second edition …

+ 자세한 내용은 여기를 클릭하십시오

Source: www.askbooks.net

Date Published: 1/3/2022

View: 6115

[PDF] Signals and Systems [Alan V. Oppenheim … – DLSCRIB

Download Signals and Systems [Alan V. Oppenheim, Alan S. Willsky with S. Ham Nawab] [2nd Edition].pdf.

+ 여기를 클릭

Source: dlscrib.com

Date Published: 4/23/2022

View: 1452

Signals And Systems, 2nd Edition [PDF] – VDOC.PUB

Signals And Systems, 2nd Edition [PDF] [7h4hl19o34v0]. Design and MATLAB concepts have been integrated in text. * Integrates applications as it relates …

+ 여기에 더 보기

Source: vdoc.pub

Date Published: 5/24/2022

View: 8484

Signals and Systems (2nd Edition) | PDF – Scribd

Signals and Systems (2nd Edition) – Free ebook download as PDF File (.pdf) or read book online for free. Authors : Simon Haykin and Barry Van Veen.

+ 여기에 더 보기

Source: pt.scribd.com

Date Published: 8/18/2021

View: 4884

Signals And Systems 2nd Edition PDF Free Download

Signals and Systems 2nd edition PDF IS a book for graduate students interested in completely learning about signals and systems.

+ 더 읽기

Source: chemicalpdf.com

Date Published: 9/10/2021

View: 3248

电子工程系列丛书 信号与系统:英文第2版

SIGNALS &. SYSTEMS. SECOND EDITION. 信号与系统. 第2版. ALAN V. OPPENHEIM … 2版.-北京;清. 华大学出版社,1998.9. 【电子工程系列丛书). ISBN 7-302-03058-8.

+ 여기에 표시

Source: discourse-production.oss-cn-shanghai.aliyuncs.com

Date Published: 1/12/2022

View: 7695

(PDF) SIGNALS and SYSTEMS – ResearchGate

PDF | The major role of the signal is the communication in innumerable domains of … In book: Signals and Systems; Publisher: Belacademya.

+ 여기를 클릭

Source: www.researchgate.net

Date Published: 5/2/2021

View: 1070

주제와 관련된 이미지 signals and systems 2nd edition pdf

주제와 관련된 더 많은 사진을 참조하십시오 [PDF] Solution Manual | Signals and Systems 2nd Edition Oppenheim \u0026 Willsky. 댓글에서 더 많은 관련 이미지를 보거나 필요한 경우 더 많은 관련 기사를 볼 수 있습니다.

[PDF] Solution Manual | Signals and Systems 2nd Edition Oppenheim \u0026 Willsky
[PDF] Solution Manual | Signals and Systems 2nd Edition Oppenheim \u0026 Willsky

주제에 대한 기사 평가 signals and systems 2nd edition pdf

  • Author: Michael Lenoir
  • Views: 조회수 793회
  • Likes: 좋아요 1개
  • Date Published: 2020. 4. 1.
  • Video Url link: https://www.youtube.com/watch?v=l6hoPz5HJQ0

Signals and Systems 2nd Edition(by Oppenheim)

Academia.edu no longer supports Internet Explorer.

To browse Academia.edu and the wider internet faster and more securely, please take a few seconds to upgrade your browser.

signals abd systems 2nd ed.pdf

Citation preview

This page intentionally left blank

SIGNALS & SYSTEMS

PRENTICE HALL SIGNAL PROCESSING SERIES

Alan V. Oppenheim, Series Editor Digital Image Restoration Two Dimensional Imaging BRIGHAM The Fast Fourier Transform and Its Applications BuRDIC Underwater Acoustic System Analysis 2/E CASTLEMAN Digital Image Processing CoHEN Time-Frequency Analysis CROCHIERE & RABINER Multirate Digital Signal Processing DuDGEON & MERSEREAU Multidimensional Digital Signal Processing HAYKIN Advances in Spectrum Analysis and Array Processing. Vols. I, II & III HAYKIN, Eo. Array Signal Processing JoHNSON & DuDGEON Array Signal Processing KAY Fundamentals of Statistical Signal Processing KAY Modern Spectral Estimation KINO Acoustic Waves: Devices, Imaging, and Analog Signal Processing LIM Two-Dimensional Signal and Image Processing LIM, Eo. Speech Enhancement LIM & OPPENHEIM, Eos. Advanced Topics in Signal Processing MARPLE Digital Spectral Analysis with Applications MccLELLAN & RADER Number Theory in Digital Signal Processing MENDEL Lessons in Estimation Theory for Signal Processing Communications and Control 2/E NIKIAS & PETROPULU Higher Order Spectra Analysis OPPENHEIM & NAWAB Symbolic and Knowledge-Based Signal Processing OPPENHEIM & WILLSKY, WITH NAWAB Signals and Systems, 2/E OPPENHEIM & ScHAFER Digital Signal Processing OPPENHEIM & ScHAFER Discrete-Time Signal Processing 0RFANIDIS Signal Processing PHILLIPS & NAGLE Digital Control Systems Analysis and Design, 3/E PICINBONO Random Signals and Systems RABINER & GoLD Theory and Applications of Digital Signal Processing RABINER & SCHAFER Digital Processing of Speech Signals RABINER & JuANG Fundamentals of Speech Recognition RoBINSON & TREITEL Geophysical Signal Analysis STEARNS & DAVID Signal Processing Algorithms in Fortran and C STEARNS & DAVID Signal Processing Algorithms in MATIAB TEKALP Digital Video Processing THERRIEN Discrete Random Signals and Statistical Signal Processing TRIBOLET Seismic Applications of Homomorphic Signal Processing VETTERLI & KovACEVIC Wavelets and Subband Coding VIADYANATHAN Multirate Systems and Filter Banks WIDROW & STEARNS Adaptive Signal Processing ANDREWS

&

BRACEWELL

HUNT

SECOND EDITION

SIGNALS & SYSTEMS ALAN V. OPPENHEIM ALAN S. WILLSKY MASSACHUSETTS INSTITUTE OF TECHNOLOGY WITH

S. HAMID NAWAB BOSTON UNIVERSITY

PRENTICE HALL UPPER SADDLE RIVER, NEW JERSEY 07458

Library of Congress Cataloging-in-Publication Data Oppenheim, Alan V. Signals and systems / Alan V. Oppenheim, Alan S. Willsky, with S. Hamid Nawab. – 2nd ed. p. cm. – Prentice-Hall signal processing series Includes bibliographical references and index. ISBN 0-13-814757-4 l. System analysis. 2. Signal theory (Telecommunication) I. Willsky, Alan S. II. Nawab, Syed Hamid. III. Title. IV. Series QA402.063 1996 621.382’23–dc20 96-19945 CIP

Acquisitions editor: Tom Robbins Production service: TKM Productions Editorial/production supervision: Sharyn Vitrano Copy editor: Brian Baker Interior and cover design: Patrice Van Acker Art director: Amy Rosen Managing editor: Bayani Mendoza DeLeon Editor-in-Chief: Marcia Horton Director of production and manufacturing: David W. Riccardi Manufacturing buyer: Donna Sullivan Editorial assistant: Phyllis Morgan © 1997 by Alan V. Oppenheim and Alan S. Willsky © 1983 by Alan V. Oppenheim, Alan S. Willsky, and Ian T. Young Published by Prentice-Hall, Inc. Simon & Schuster / A Viacom Company Upper Saddle River, New Jersey 07458

Printed in the United States of America 10

9

8

7

6

5

4

ISBN 0-13–814757–4 Prentice-Hall International (UK) Limited, London Prentice-Hall of Australia Pty. Limited, Sydney Prentice-Hall Canada Inc., Toronto Prentice-Hall Hispanoamericana, S.A., Mexico Prentice-Hall of India Private Limited, New Delhi Prentice-Hall of Japan, Inc., Tokyo Simon & Schuster Asia Pte. Ltd., Singapore Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

To Phyllis, Jason, and Justine To Susanna, Lydia, and Kate

CONTENTS PREFACE

XVII

ACKNOWLEDGEMENTS FOREWORD

XXV

XXVII

1 SIGNALS AND SYSTEMS 1 1.0 1.1

1.2

1.3

1.4

1.5

1.6

1.7

Introduction 1 Continuous-Time and Discrete-Time Signals 1 1.1.1 Examples and Mathematical Representation 1 1.1.2 Signal Energy and Power 5 Transformations of the Independent Variable 7 1.2.1 Examples of Transformations of the Independent Variable 8 1.2.2 Periodic Signals 11 1.2.3 Even and Odd Signals 13 Exponential and Sinusoidal Signals 14 1.3.1 Continuous-Time Complex Exponential and Sinusoidal Signals 15 1.3.2 Discrete-Time Complex Exponential and Sinusoidal Signals 21 1.3.3 Periodicity Properties of Discrete-Time Complex Exponentials 25 The Unit Impulse and Unit Step Functions 30 1.4.1 The Discrete-Time Unit Impulse and Unit Step Sequences 30 1.4.2 The Continuous-Time Unit Step and Unit Impulse Functions 32 Continuous-Time and Discrete-Time Systems 38 1.5.1 Simple Examples of Systems 39 1.5.2 Interconnections of Systems 41 Basic System Properties 44 1.6.1 Systems with and without Memory 44 1.6.2 Invertibility and Inverse Systems 45 1.6.3 Causality 46 1.6.4 Stability 48 1.6.5 Time Invariance 50 1.6.6 Linearity 53 Summary 56 Problems 57

2 LINEAR TIME-INVARIANT SYSTEMS 74 2.0 2.1

Introduction 74 Discrete-Time LTI Systems: The Convolution Sum 75 vii

viii

Contents

2.1.1

2.2

2.3

2.4

2.5

2.6

The Representation of Discrete-Time Signals in Terms of Impulses 75 2.1.2 The Discrete-Time Unit Impulse Response and the Convolution-Sum Representation of LTI Systems 77 Continuous-Time LTI Systems: The Convolution Integral 90 2.2.1 The Representation of Continuous-Time Signals in Terms of Impulses 90 2.2.2 The Continuous-Time Unit Impulse Response and the Convolution Integral Representation of LTI Systems 94 Properties of Linear Time-Invariant Systems 103 2.3.1 The Commutative Property 104 2.3.2 The Distributive Property 104 2.3.3 The Associative Property 107 2.3.4 LTI Systems with and without Memory 108 2.3.5 Invertibility of LTI Systems 109 2.3.6 Causality for LTI Systems 112 2.3.7 Stability for LTI Systems 113 2.3.8 The Unit Step Response of an LTI System 115 Causal LTI Systems Described by Differential and Difference Equations 116 2.4.1 Linear Constant -Coefficient Differential Equations 117 2.4.2 Linear Constant-Coefficient Difference Equations 121 2.4.3 Block Diagram Representations of First-Order Systems Described by Differential and Difference Equations 124 Singularity Functions 127 2.5.1 The Unit Impulse as an Idealized Short Pulse 128 2.5.2 Defining the Unit Impulse through Convolution 131 2.5.3 Unit Doublets and Other Singularity Functions 132 Summary 137 Problems 137

3 FOURIER SERIES REPRESENTATION OF PERIODIC SIGNALS 3.0 3.1 3.2 3.3

3.4 3.5

177

Introduction 177 A Historical Perspective 178 The Response of LTI Systems to Complex Exponentials 182 Fourier Series Representation of Continuous-Time Periodic Signals 186 3.3.1 Linear Combinations of Harmonically Related Complex Exponentials 186 3.3.2 Determination of the Fourier Series Representation of a Continuous-Time Periodic Signal 190 Convergence of the Fourier Series 195 Properties of Continuous-Time Fourier Series 202 3.5.1 Linearity 202

Contents

3.5.2 3.5.3 3.5.4 3.5.5 3.5.6

3.6

3.7

3.8 3.9

3.10

3.11

3.12

4

Time Shifting 202 Time Reversal 203 Time Scaling 204 Multiplication 204 Conjugation and Conjugate Symmetry 204 Parseval’s Relation for Continuous-Time Periodic Signals 3.5.7 205 3.5.8 Summary of Properties of the Continuous-Time Fourier Series 205 3.5.9 Examples 205 Fourier Series Representation of Discrete-Time Periodic Signal 211 3.6.1 Linear Combinations of Harmonically Related Complex Exponentials 211 3.6.2 Determination of the Fourier Series Representation of a Periodic Signal 212 Properties of Discrete-Time Fourier Series 221 3.7.1 Multiplication 222 3.7.2 First Difference 222 3.7.3 Parseval’s Relation for Discrete-Time Periodic Signals 223 3.7.4 Examples 223 Fourier Series and LTI Systems 226 Filtering 231 3.9.1 Frequency-Shaping Filters 232 3.9.2 Frequency-Selective Filters 236 Examples of Continuous-Time Filters Described by Differential Equations 239 3.10.1 A Simple RC Lowpass Filter 239 3.10.2 A Simple RC Highpass Filter 241 Examples of Discrete-Time Filters Described by Difference Equations 244 3.11.1 First-Order Recursive Discrete-Time Filters 244 3.11.2 Nonrecursive Discrete-Time Filters 245 Summary 249 Problems 250

THE CONTINUOUS-TIME FOURIER TRANSFORM 284 4.0 4.1

4.2 4.3

Introduction 284 Representation of Aperiodic Signals: The Continuous-Time Fourier Transform 285 4.1.1 Development of the Fourier Transform Representation of an Aperiodic Signal 285 4.1.2 Convergence of Fourier Transforms 289 4.1.3 Examples of Continuous-Time Fourier Transforms 290 The Fourier Transform for Periodic Signals 296 Properties of the Continuous-Time Fourier Transform 300 4.3.1 Linearity 301

ix

x

4.4 4.5 4.6 4.7 4.8

4.3.2 Time Shifting 301 4.3.3 Conjugation and Conjugate Symmetry 303 4.3.4 Differentiation and Integration 306 4.3.5 Time and Frequency Scaling 308 4.3.6 Duality 309 4.3.7 Parseval’s Relation 312 The Convolution Property 314 4.4.1 Examples 317 The Multiplication Property 322 4.5.1 Frequency-Selective Filtering with Variable Center Frequency 325 Tables of Fourier Properties and of Basic Fourier Transform Pairs 328 Systems Characterized by Linear Constant-Coefficient Differential Equations 330 Summary 333 Problems 334

5 THE DISCRETE-TIME FOURIER TRANSFORM 358 5.0 5.1

5.2 5.3

5.4 5.5 5.6 5.7

Introduction 358 Representation of Aperiodic Signals: The Discrete-Time Fourier Transform 359 5.1.1 Development of the Discrete-Time Fourier Transform 359 5.1.2 Examples of Discrete-Time Fourier Transforms 362 5.1.3 Convergence Issues Associated with the Discrete-Time Fourier Transform 366 The Fourier Transform for Periodic Signals 367 Properties of the Discrete-Time Fourier Transform 372 5.3.1 Periodicity of the Discrete-Time Fourier Transform 373 5.3.2 Linearity of the Fourier Transform 373 5.3.3 Time Shifting and Frequency Shifting 373 5.3.4 Conjugation and Conjugate Symmetry 375 5.3.5 Differencing and Accumulation 375 5.3.6 Time Reversal 376 5.3.7 Time Expansion 377 5.3.8 Differentiation in Frequency 380 5.3.9 Parseval’s Relation 380 The Convolution Property 382 5.4.1 Examples 383 The Multiplication Property 388 Tables of Fourier Transform Properties and Basic Fourier Transform Pairs 390 Duality 390 5.7.1 Duality in the Discrete-Time Fourier Series 391 5.7.2 Duality between the Discrete-Time Fourier Transform and the Continuous-Time Fourier Series 395

Contents

xi

Contents

5.8 5.9

Systems Characterized by Linear Constant-Coefficient Difference Equations 396 Summary 399 Problems 400

6 TIME AND FREQUENCY CHARACTERIZATION OF SIGNALS AND SYSTEMS 6.0 6.1 6.2

6.3 6.4 6.5

6.6

6.7

6.8

423

Introduction 423 The Magnitude-Phase Representation of the Fourier Transform 423 The Magnitude-Phase Representation of the Frequency Response of LTI Systems 427 6.2.1 Linear and Nonlinear Phase 428 6.2.2 Group Delay 430 6.2.3 Log-Magnitude and Bode Plots 436 Time-Domain Properties of Ideal Frequency-Selective Filters 439 Time-Domain and Frequency-Domain Aspects of Nonideal Filters 444 First-Order and Second-Order Continuous-Time Systems 448 6.5.1 First-Order Continuous-Time Systems 448 6.5.2 Second-Order Continuous-Time Systems 451 6.5.3 Bode Plots for Rational Frequency Responses 456 First-Order and Second-Order Discrete-Time Systems 461 6.6.1 First-Order Discrete-Time Systems 461 6.6.2 Second-Order Discrete-Time Systems 465 Examples of Time- and Frequency-Domain Analysis of Systems 472 6.7.1 Analysis of an Automobile Suspension System 473 6.7.2 Examples of Discrete-Time Nonrecursive Filter 476 Summary 482 Problems 483

7 SAMPLING 514 7.0 7.1

7.2 7.3 7.4

Introduction 514 Representation of a Continuous-Time Signal by Its Samples: The Sampling Theorem 515 7.1.1 Impulse-Train Sampling 516 7.1.2 Sampling with a Zero-Order Hold 520 Reconstruction of a Signal from Its Samples Using Interpolation 522 The Effect of Undersampling: Aliasing 527 Discrete-Time Processing of Continuous-Time Signals 534 7.4.1 Digital Differentiator 541 7.4.2 Half-Sample Delay 543

xii

7.5

7.6

Sampling of Discrete-Time Signals 545 7.5.1 Impulse-Train Sampling 545 7.5.2 Discrete-Time Decimation and Interpolation 549 Summary 555 Problems 556

8 COMMUNICATION SYSTEMS 582 8.0 8.1

8.2

8.3 8.4 8.5

8.6

8.7

8.8

8.9

Introduction 582 Complex Exponential and Sinusoidal Amplitude Modulation 583 8.1.1 Amplitude Modulation with a Complex Exponential Carrier 583 8.1.2 Amplitude Modulation with a Sinusoidal Carrier 585 Demodulation for Sinusoidal AM 587 8.2.1 Synchronous Demodulation 587 8.2.2 Asynchronous Demodulation 590 Frequency-Division Multiplexing 594 Single-Sideband Sinusoidal Amplitude Modulation 597 Amplitude Modulation with a Pulse-Train Carrier 601 8.5.1 Modulation of a Pulse-Train Carrier 601 8.5.2 Time-Division Multiplexing 604 Pulse-Amplitude Modulation 604 8.6.1 Pulse-Amplitude Modulated Signals 604 8.6.2 Intersymbol Interference in PAM Systems 607 8.6.3 Digital Pulse-Amplitude and Pulse-Code Modulation 610 Sinusoidal Frequency Modulation 611 8.7.1 Narrowband Frequency Modulation 613 8.7.2 Wideband Frequency Modulation 615 8.7.3 Periodic Square-Wave Modulating Signal 617 Discrete-Time Modulation 619 8.8.1 Discrete-Time Sinusoidal Amplitude Modulation 619 8.8.2 Discrete-Time Transmodulation 623 Summary 623 Problems 625

9 THE LAPLACE TRANSFORM 654 9.0 9.1 9.2 9.3 9.4

9.5

Introduction 654 The Laplace Transform 655 The Region of Convergence for Laplace Transforms 662 The Inverse Laplace Transform 670 Geometric Evaluation of the Fourier Transform from the Pole-Zero Plot 674 9.4.1 First-Order Systems 676 9.4.2 Second-Order Systems 677 9.4.3 All-Pass Systems 681 Properties of the Laplace Transform 682 9.5.1 Linearity of the Laplace Transform 683 9.5.2 Time Shifting 684

Contents

Contents

9.6 9.7

9.8

9.9

9.10

9.5.3 Shifting in the s-Domain 685 9.5.4 Time Scaling 685 9.5.5 Conjugation 687 9.5.6 Convolution Property 687 9.5.7 Differentiation in the Time Domain 688 9.5.8 Differentiation in the s-Domain 688 9.5.9 Integration in the Time Domain 690 9.5.10 The Initial- and Final-Value Theorems 690 9.5.11 Table of Properties 691 Some Laplace Transform Pairs 692 Analysis and Characterization of LTI Systems Using the Laplace Transform 693 9.7.1 Causality 693 9.7.2 Stability 695 9.7.3 LTI Systems Characterized by Linear Constant-Coefficient Differential Equations 698 9.7.4 Examples Relating System Behavior to the System Function 701 9.7.5 Butterworth Filters 703 System Function Algebra and Block Diagram Representations 706 9.8.1 System Functions for Interconnections of LTI Systems 707 9.8.2 Block Diagram Representations for Causal LTI Systems Described by Differential Equations and Rational System Functions 708 The Unilateral Laplace Transform 714 9.9.1 Examples of Unilateral Laplace Transforms 714 9.9.2 Properties of the Unilateral Laplace Transform 716 9.9.3 Solving Differential Equations Using the Unilateral Laplace Transform 719 Summary 720 Problems 721

10 THE Z-TRANSFORM 741 10.0 10.1 10.2 10.3 10.4

10.5

Introduction 741 The z-Transform 741 The Region of Convergence for the z-Transform 748 The Inverse z-Transform 757 Geometric Evaluation of the Fourier Transform from the Pole-Zero Plot 763 10.4.1 First-Order Systems 763 10.4.2 Second-Order Systems 765 Properties of the z-Transform 767 10.5.1 Linearity 767 10.5.2 Time Shifting 767 10.5.3 Scaling in the z-Domain 768 10.5.4 Time Reversal 769 10.5.5 Time Expansion 769

xiii

xiv

10.6 10.7

10.8

10.9

10.10

10.5.6 Conjugation 770 10.5.7 The Convolution Property 770 10.5.8 Differentiation in the z-Domain 772 10.5.9 The Initial-Value Theorem 773 10.5.10 Summary of Properties 774 Some Common z-Transform Pairs 774 Analysis and Characterization of LTI Systems Using z-Transforms 774 10.7.1 Causality 776 10.7.2 Stability 777 10.7.3 LTI Systems Characterized by Linear Constant-Coefficient Difference Equations 779 10.7.4 Examples Relating System Behavior to the System Function 781 System Function Algebra and Block Diagram Representations 783 10.8.1 System Functions for Interconnections of LTI Systems 784 10.8.2 Block Diagram Representations for Causal LTI Systems Described by Difference Equations and Rational System Functions 784 The Unilateral z-Transform 789 10.9 .1 Examples of Unilateral z-Transforms and Inverse Transforms 790 10.9 .2 Properties of the Unilateral z-Transform 792 10.9 .3 Solving Difference Equations Using the Unilateral z-Transform 795 Summary 796 Problems 797

11 LINEAR FEEDBACK SYSTEMS 816 11.0 11.1 11.2

11.3

11.4

Introduction 816 Linear Feedback Systems 819 Some Applications and Consequences of Feedback 820 11.2.1 Inverse System Design 820 11.2.2 Compensation for Nonideal Elements 821 11.2.3 Stabilization of Unstable Systems 823 11.2.4 Sampled-Data Feedback Systems 826 11.2.5 Tracking Systems 828 11.2.6 Destabilization Caused by Feedback 830 Root-Locus Analysis of Linear Feedback Systems 832 11.3.1 An Introductory Example 833 11.3.2 Equation for the Closed-Loop Poles 834 11.3.3 The End Points of the Root Locus: The Closed-Loop Poles for K = 0 and |K| = +∞ 836 11.3.4 The Angle Criterion 836 11.3.5 Properties of the Root Locus 841 The Nyquist Stability Criterion 846 11.4.1 The Encirclement Property 847

Contents

Contents

11.4.2

11.5 11.6

The Nyquist Criterion for Continuous-Time LTI Feedback Systems 850 11.4.3 The Nyquist Criterion for Discrete-Time LTI Feedback Systems 856 Gain and Phase Margins 858 Summary 866 Problems 867

APPENDIX PARTIAL-FRACTION EXPANSION 909 BIBLIOGRAPHY 921 ANSWERS 931 INDEX 941

xv

This page intentionally left blank

PREFACE This book is the second edition of a text designed for undergraduate courses in signals and systems. While such courses are frequently found in electrical engineering curricula, the concepts and techniques that form the core of the subject are of fundamental importance in all engineering disciplines. In fact, the scope of potential and actual applications of the methods of signal and system analysis continues to expand as engineers are confronted with new challenges involving the synthesis or analysis of complex processes. For these reasons we feel that a course in signals and systems not only is an essential element in an engineering program but also can be one of the most rewarding, exciting, and useful courses that engineering students take during their undergraduate education. Our treatment of the subject of signals and systems in this second edition maintains the same general philosophy as in the first edition but with significant rewriting, restructuring, and additions. These changes are designed to help both the instructor in presenting the subject material and the student in mastering it. In the preface to the first edition we stated that our overall approach to signals and systems had been guided by the continuing developments in technologies for signal and system design and implementation, which made it increasingly important for a student to have equal familiarity with techniques suitable for analyzing and synthesizing both continuous-time and discrete-time systems. As we write the preface to this second edition, that observation and guiding principle are even more true than before. Thus, while students studying signals and systems should certainly have a solid foundation in disciplines based on the laws of physics, they must also have a firm grounding in the use of computers for the analysis of phenomena and the implementation of systems and algorithms. As a consequence, engineering curricula now reflect a blend of subjects, some involving continuous-time models and others focusing on the use of computers and discrete representations. For these reasons, signals and systems courses that bring discretetime and continuous-time concepts together in a unified way play an increasingly important role in the education of engineering students and in their preparation for current and future developments in their chosen fields. It is with these goals in mind that we have structured this book to develop in parallel the methods of analysis for continuous-time and discrete-time signals and systems. This approach also offers a distinct and extremely important pedagogical advantage. Specifically, we are able to draw on the similarities between continuous- and discrete-time methods in order to share insights and intuition developed in each domain. Similarly, we can exploit the differences between them to sharpen an understanding of the distinct properties of each. In organizing the material both originally and now in the second edition, we have also considered it essential to introduce the student to some of the important uses of the basic methods that are developed in the book. Not only does this provide the student with an appreciation for the range of applications of the techniques being learned and for directions for further study, but it also helps to deepen understanding of the subject. To achieve this xvii

xviii

Preface

goal we include introductory treatments on the subjects of filtering, communications, sampling, discrete-time processing of continuous-time signals, and feedback. In fact, in one of the major changes in this second edition, we have introduced the concept of frequencydomain filtering very early in our treatment of Fourier analysis in order to provide both motivation for and insight into this very important topic. In addition, we have again included an up-to-date bibliography at the end of the book in order to assist the student who is interested in pursuing additional and more advanced studies of the methods and applications of signal and system analysis. The organization of the book reflects our conviction that full mastery of a subject of this nature cannot be accomplished without a significant amount of practice in using and applying the tools that are developed. Consequently, in the second edition we have significantly increased the number of worked examples within each chapter. We have also enhanced one of the key assets of the first edition, namely the end-of-chapter homework problems. As in the first edition, we have included a substantial number of problems, totaling more than 600 in number. A majority of the problems included here are new and thus provide additional flexibility for the instructor in preparing homework assignments. In addition, in order to enhance the utility of the problems for both the student and the instructor we have made a number of other changes to the organization and presentation of the problems. In particular, we have organized the problems in each chapter under several specific headings, each of which spans the material in the entire chapter but with a different objective. The first two sections of problems in each chapter emphasize the mechanics of using the basic concepts and methods presented in the chapter. For the first of these two sections, which has the heading Basic Problems with Answers, we have also provided answers (but not solutions) at the end of the book. These answers provide a simple and immediate way for the student to check his or her understanding of the material. The problems in this first section are generally appropriate for inclusion in homework sets. Also, in order to give the instructor additional flexibility in assigning homework problems, we have provided a second section of Basic Problems for which answers have not been included. A third section of problems in each chapter, organized under the heading of Advanced Problems, is oriented toward exploring and elaborating upon the foundations and practical implications of the material in the text. These problems often involve mathematical derivations and more sophisticated use of the concepts and methods presented in the chapter. Some chapters also include a section of Extension Problems which involve extensions of material presented in the chapter and/or involve the use of knowledge from applications that are outside the scope of the main text (such as advanced circuits or mechanical systems). The overall variety and quantity of problems in each chapter will hopefully provide students with the means to develop their understanding of the material and instructors with considerable flexibility in putting together homework sets that are tailored to the specific needs of their students. A solutions manual is also available to instructors through the publisher. Another significant additional enhancement to this second edition is the availability of the companion book Explorations in Signals and Systems Using MATLAB by Buck, Daniel, and Singer. This book contains MATLAB™-based computer exercises for each topic in the text, and should be of great assistance to both instructor and student.

Preface

xix

Students using this book are assumed to have a basic background in calculus as well as some experience in manipulating complex numbers and some exposure to differential equations. With this background, the book is self-contained. In particular, no prior experience with system analysis, convolution, Fourier analysis, or Laplace and z-transforms is assumed. Prior to learning the subject of signals and systems most students will have had a course such as basic circuit theory for electrical engineers or fundamentals of dynamics for mechanical engineers. Such subjects touch on some of the basic ideas that are developed more fully in this text. This background can clearly be of great value to students in providing additional perspective as they proceed through the book. The Foreword, which follows this preface, is written to offer the reader motivation and perspective for the subject of signals and systems in general and our treatment of it in particular. We begin Chapter 1 by introducing some of the elementary ideas related to the mathematical representation of signals and systems. In particular we discuss transformations (such as time shifts and scaling) of the independent variable of a signal. We also introduce some of the most important and basic continuous-time and discrete-time signals, namely real and complex exponentials and the continuous-time and discrete-time unit step and unit impulse. Chapter 1 also introduces block diagram representations of interconnections of systems and discusses several basic system properties such as causality, linearity and time-invariance. In Chapter 2 we build on these last two properties, together with the sifting property of unit impulses to develop the convolution-sum representation for discretetime linear, time-invariant (LTI) systems and the convolution integral representation for continuous-time LTI systems. In this treatment we use the intuition gained from our development of the discrete-time case as an aid in deriving and understanding its continuoustime counterpart. We then turn to a discussion of causal, LTI systems characterized by linear constant-coefficient differential and difference equations. In this introductory discussion we review the basic ideas involved in solving linear differential equations (to which most students will have had some previous exposure) and we also provide a discussion of analogous methods for linear difference equations. However, the primary focus of our development in Chapter 2 is not on methods of solution, since more convenient approaches are developed later using transform methods. Instead, in this first look, our intent is to provide the student with some appreciation for these extremely important classes of systems, which will be encountered often in subsequent chapters. Finally, Chapter 2 concludes with a brief discussion of singularity functions—steps, impulses, doublets, and so forth—in the context of their role in the description and analysis of continuous-time LTI systems. In particular, we stress the interpretation of these signals in terms of how they are defined under convolution—that is, in terms of the responses of LTI systems to these idealized signals. Chapters 3 through 6 present a thorough and self-contained development of the methods of Fourier analysis in both continuous and discrete time and together represent the most significant reorganization and revision in the second edition. In particular, as we indicated previously, we have introduced the concept of frequency-domain filtering at a much earlier point in the development in order to provide motivation for and a concrete application of the Fourier methods being developed. As in the first edition, we begin the discussions in Chapter 3 by emphasizing and illustrating the two fundamental reasons for the important

xx

Preface

role Fourier analysis plays in the study of signals and systems in both continuous and discrete time: (1) extremely broad classes of signals can be represented as weighted sums or integrals of complex exponentials; and (2) the response of an LTI system to a complex exponential input is the same exponential multiplied by a complex-number characteristic of the system. However, in contrast to the first edition, the focus of attention in Chapter 3 is on Fourier series representations for periodic signals in both continuous time and discrete time. In this way we not only introduce and examine many of the properties of Fourier representations without the additional mathematical generalization required to obtain the Fourier transform for aperiodic signals, but we also can introduce the application to filtering at a very early stage in the development. In particular, taking advantage of the fact that complex exponentials are eigenfunctions of LTI systems, we introduce the frequency response of an LTI system and use it to discuss the concept of frequency-selective filtering, to introduce ideal filters, and to give several examples of nonideal filters described by differential and difference equations. In this way, with a minimum of mathematical preliminaries, we provide the student with a deeper appreciation for what a Fourier representation means and why it is such a useful construct. Chapters 4 and 5 then build on the foundation provided by Chapter 3 as we develop first the continuous-time Fourier transform in Chapter 4 and, in a parallel fashion, the discretetime Fourier transform in Chapter 5. In both chapters we derive the Fourier transform representation of an aperiodic signal as the limit of the Fourier series for a signal whose period becomes arbitrarily large. This perspective emphasizes the close relationship between Fourier series and transforms, which we develop further in subsequent sections and which allows us to transfer the intuition developed for Fourier series in Chapter 3 to the more general context of Fourier transforms. In both chapters we have included a discussion of the many important properties of Fourier transforms, with special emphasis placed on the convolution and multiplication properties. In particular, the convolution property allows us to take a second look at the topic of frequency-selective filtering, while the multiplication property serves as the starting point for our treatment of sampling and modulation in later chapters. Finally, in the last sections in Chapters 4 and 5 we use transform methods to determine the frequency responses of LTI systems described by differential and difference equations and to provide several examples illustrating how Fourier transforms can be used to compute the responses for such systems. To supplement these discussions (and later treatments of Laplace and z-transforms) we have again included an Appendix at the end of the book that contains a description of the method of partial fraction expansion. Our treatment of Fourier analysis in these two chapters is characteristic of the parallel treatment we have developed. Specifically, in our discussion in Chapter 5, we are able to build on much of the insight developed in Chapter 4 for the continuous-time case, and toward the end of Chapter 5 we emphasize the complete duality in continuous-time and discrete-time Fourier representations. In addition, we bring the special nature of each domain into sharper focus by contrasting the differences between continuous- and discrete-time Fourier analysis. As those familiar with the first edition will note, the lengths and scopes of Chapters 4 and 5 in the second edition are considerably smaller than their first edition counterparts. This is due not only to the fact that Fourier series are now dealt with in a separate chapter but also to our moving several topics into Chapter 6. The result, we believe, has several

Preface

xxi

significant benefits. First, the presentation in three shorter chapters of the basic concepts and results of Fourier analysis, together with the introduction of the concept of frequencyselective filtering, should help the student in organizing his or her understanding of this material and in developing some intuition about the frequency domain and appreciation for its potential applications. Then, with Chapters 3-5 as a foundation, we can engage in a more detailed look at a number of important topics and applications. In Chapter 6 we take a deeper look at both the time- and frequency-domain characteristics of LTI systems. For example, we introduce magnitude-phase and Bode plot representations for frequency responses and discuss the effect of frequency response phase on the time domain characteristics of the output of an LTI system. In addition, we examine the time- and frequency-domain behavior of ideal and nonideal filters and the tradeoffs between these that must be addressed in practice. We also take a careful look at first- and second-order systems and their roles as basic building blocks for more complex system synthesis and analysis in both continuous and discrete time. Finally, we discuss several other more complex examples of filters in both continuous and discrete time. These examples together with the numerous other aspects of filtering explored in the problems at the end of the chapter provide the student with some appreciation for the richness and flavor of this important subject. While each of the topics in Chapter 6 was present in the first edition, we believe that by reorganizing and collecting them in a separate chapter following the basic development of Fourier analysis, we have both simplified the introduction of this important topic in Chapters 3-5 and presented in Chapter 6 a considerably more cohesive picture of time- and frequency-domain issues. In response to suggestions and preferences expressed by many users of the first edition we have modified notation in the discussion of Fourier transforms to be more consistent with notation most typically used for continuous-time and discrete-time Fourier transforms. Specifically, beginning with Chapter 3 we now denote the continuous-time Fourier transform as X( jω ) and the discrete-time Fourier transform as X(e jω). As with all options with notation, there is not a unique best choice for the notation for Fourier transforms. However, it is our feeling, and that of many of our colleagues, that the notation used in this edition represents the preferable choice. Our treatment of sampling in Chapter 7 is concerned primarily with the sampling theorem and its implications. However, to place this subject in perspective we begin by discussing the general concepts of representing a continuous-time signal in terms of its samples and the reconstruction of signals using interpolation. After using frequency-domain methods to derive the sampling theorem, we consider both the frequency and time domains to provide intuition concerning the phenomenon of aliasing resulting from undersampling. One of the very important uses of sampling is in the discrete-time processing of continuoustime signals, a topic that we explore at some length in this chapter. Following this, we turn to the sampling of discrete-time signals. The basic result underlying discrete-time sampling is developed in a manner that parallels that used in continuous time, and the applications of this result to problems of decimation and interpolation are described. Again a variety of other applications, in both continuous and discrete time, are addressed in the problems. Once again the reader acquainted with our first edition will note a change, in this case involving the reversal in the order of the presentation of sampling and communications. We have chosen to place sampling before communications in the second edition both because

xxii

Preface

we can call on simple intuition to motivate and describe the processes of sampling and reconstruction from samples and also because this order of presentation then allows us in Chapter 8 to talk more easily about forms of communication systems that are closely related to sampling or rely fundamentally on using a sampled version of the signal to be transmitted. Our treatment of communications in Chapter 8 includes an in -depth discussion of continuous-time sinusoidal amplitude modulation (AM), which begins with the straightforward application of the multiplication property to describe the effect of sinusoidal AM in the frequency domain and to suggest how the original modulating signal can be recovered. Following this, we develop a number of additional issues and applications related to sinusoidal modulation, including frequency-division multiplexing and single-sideband modulation. Many other examples and applications are described in the problems. Several additional topics are covered in Chapter 8. The first of these is amplitude modulation of a pulse train and time-division multiplexing, which has a close connection to the topic of sampling in Chapter 7. Indeed we make this tie even more explicit and provide a look into the important field of digital communications by introducing and briefly describing the topics of pulseamplitude modulation (PAM) and intersymbol interference. Finally, our discussion of frequency modulation (FM) provides the reader with a look at a nonlinear modulation problem. Although the analysis of FM systems is not as straightforward as for the AM case, our introductory treatment indicates how frequency-domain methods can be used to gain a significant amount of insight into the characteristics of FM signals and systems. Through these discussions and the many other aspects of modulation and communications explored in the problems in this chapter we believe that the student can gain an appreciation both for the richness of the field of communications and for the central role that the tools of signals and systems analysis play in it. Chapters 9 and 10 treat the Laplace and z-transforms, respectively. For the most part, we focus on the bilateral versions of these transforms, although in the last section of each chapter we discuss unilateral transforms and their use in solving differential and difference equations with nonzero initial conditions. Both chapters include discussions on: the close relationship between these transforms and Fourier transforms; the class of rational transforms and their representation in terms of poles and zeros; the region of convergence of a Laplace or z-transform and its relationship to properties of the signal with which it is associated; inverse transforms using partial fraction expansion; the geometric evaluation of system functions and frequency responses from pole-zero plots; and basic transform properties. In addition, in each chapter we examine the properties and uses of system functions for LTI systems. Included in these discussions are the determination of system functions for systems characterized by differential and difference equations; the use of system function algebra for interconnections of LTI systems; and the construction of cascade, parallel- and directform block-diagram representations for systems with rational system functions. The tools of Laplace and z-transforms form the basis for our examination of linear feedback systems in Chapter 11. We begin this chapter by describing a number of the important uses and properties of feedback systems, including stabilizing unstable systems, designing tracking systems, and reducing system sensitivity. In subsequent sections we use the tools that we have developed in previous chapters to examine three topics that are of importance for both continuous-time and discrete-time feedback systems. These are root locus analysis,

Preface

xxiii

Nyquist plots and the Nyquist criterion, and log-magnitude/phase plots and the concepts of phase and gain margins for stable feedback systems. The subject of signals and systems is an extraordinarily rich one, and a variety of approaches can be taken in designing an introductory course. It was our intention with the first edition and again with this second edition to provide instructors with a great deal of flexibility in structuring their presentations of the subject. To obtain this flexibility and to maximize the usefulness of this book for instructors, we have chosen to present thorough, indepth treatments of a cohesive set of topics that forms the core of most introductory courses on signals and systems. In achieving this depth we have of necessity omitted introductions to topics such as descriptions of random signals and state space models that are sometimes included in first courses on signals and systems. Traditionally, at many schools, such topics are not included in introductory courses but rather are developed in more depth in followon undergraduate courses or in courses explicitly devoted to their investigation. Although we have not included an introduction to state space in the book, instructors of introductory courses can easily incorporate it into the treatments of differential and difference equations that can be found throughout the book. In particular, the discussions in Chapters 9 and I 0 on block diagram representations for systems with rational system functions and on unilateral transforms and their use in solving differential and difference equations with initial conditions form natural points of departure for the discussions of state-space representations. A typical one-semester course at the sophomore-junior level using this book would cover Chapters 1-5 in reasonable depth (although various topics in each chapter are easily omitted at the discretion of the instructor) with selected topics chosen from the remaining chapters. For example, one possibility is to present several of the basic topics in Chapters 6-8 together with a treatment of Laplace and z-transforms and perhaps a brief introduction to the use of system function concepts to analyze feedback systems. A variety of alternate formats are possible, including one that incorporates an introduction to state space or one in which more focus is placed on continuous-time systems by de-emphasizing Chapters 5 and 10 and the discrete-time topics in Chapters 3, 7, 8, and 11. In addition to these course formats this book can be used as the basic text for a thorough, two-semester sequence on linear systems. Alternatively, the portions of the book not used in a first course on signals and systems can, together with other sources, form the basis for a subsequent course. For example, much of the material in this book forms a direct bridge to subjects such as state space analysis, control systems, digital signal processing, communications and statistical signal processing. Consequently, a follow-on course can be constructed that uses some of the topics in this book together with supplementary material in order to provide an introduction to one or more of these advanced subjects. In fact, a new course following this model has been developed at MIT and has proven not only to be a popular course among our students but also a crucial component of our signals and systems curriculum. As it was with the first edition, in the process of writing this book we have been fortunate to have received assistance, suggestions, and support from numerous colleagues, students and friends. The ideas and perspectives that form the heart of this book have continued to evolve as a result of our own experiences in teaching signals and systems and the influences

xxiv

Preface

of the many colleagues and students with whom we have worked. We would like to thank Professor Ian T. Young for his contributions to the first edition of this book and to thank and welcome Professor Hamid Nawab for the significant role he played in the development and complete restructuring of the examples and problems for this second edition. We also express our appreciation to John Buck, Michael Daniel and Andrew Singer for writing the MATLAB companion to the text. In addition, we would like to thank Jason Oppenheim for the use of one of his original photographs and Vivian Berman for her ideas and help in arriving at a cover design. Also, as indicated on the acknowledgement page that follows, we are deeply grateful to the many students and colleagues who devoted a significant number of hours to a variety of aspects of the preparation of this second edition. We would also like to express our sincere thanks to Mr. Ray Stata and Analog Devices, Inc. for their generous and continued support of signal processing and this text through funding of the Distinguished Professor Chair in Electrical Engineering. We also thank M.I.T. for providing support and an invigorating environment in which to develop our ideas. The encouragement, patience, technical support, and enthusiasm provided by PrenticeHall, and in particular by Marcia Horton, Tom Robbins, Don Fowley, and their predecessors and by Ralph Pescatore of TKM Productions and the production staff at Prentice-Hall, have been crucial in making this second edition a reality. Alan V. Oppenheim Alan S. Willsky Cambridge, Massachusetts

AcKNOWLEDGMENTS In producing this second edition we were fortunate to receive the assistance of many colleagues, students, and friends who were extremely generous with their time. We express our deep appreciation to:

Jon Maiara and Ashok Popat for their help in generating many of the figures and images. Babak Ayazifar and Austin Frakt for their help in updating and assembling the bibliography. Ramamurthy Mani for preparing the solutions manual for the text and for his help in generating many of t.he figures. Michael Daniel for coordinating and managing the LaTeX files as the various drafts of the second edition were being produced and modified. John Buck for his thorough reading of the entire draft of this second edition. Robert Becker, Sally Bemus, Maggie Beucler, Ben Halpern, Jon Maira, Chirag Patel, and Jerry Weinstein for their efforts in producing the various LaTeX drafts of the book. And to all who helped in careful reviewing of the page proofs:

Babak Ayazifar Richard Barron Rebecca Bates George Bevis Sarit Birzon Nabil Bitar Nirav Dagli Anne Findlay Austin Frakt Siddhartha Gupta Christoforos Hadjicostis Terrence Ho Mark Ibanez Seema Jaggi Patrick Kreidl

Christina Lamarre Nicholas Laneman Li Lee Sean Lindsay Jeffrey T. Ludwig Seth Pappas Adrienne Prahler Ryan Riddolls Alan Seefeldt Sekhar Tatikonda Shawn Verbout Kathleen Wage Alex Wang Joseph Winograd

XXV

This page intentionally left blank

FoREWORD The concepts of signals and systems arise in a wide variety of fields, and the ideas and techniques associated with these concepts play an important role in such diverse areas of science and technology as communications, aeronautics and astronautics, circuit design, acoustics, seismology, biomedical engineering, energy generation and distribution systems, chemical process control, and speech processing. Although the physical nature of the signals and systems that arise in these various disciplines may be drastically different, they all have two very basic features in common. The signals, which are functions of one or more independent variables, contain information about the behavior or nature of some phenomenon, whereas the systems respond to particular signals by producing other signals or some desired behavior. Voltages and currents as a function of time in an electrical circuit are examples of signals, and a circuit is itself an example of a system, which in this case responds to applied voltages and currents. As another example, when an automobile driver depresses the accelerator pedal, the automobile responds by increasing the speed of the vehicle. In this case, the system is the automobile, the pressure on the accelerator pedal the input to the system, and the automobile speed the response. A computer program for the automated diagnosis of electrocardiograms can be viewed as a system which has as its input a digitized electrocardiogram and which produces estimates of parameters such as heart rate as outputs. A camera is a system that receives light from different sources and reflected from objects and produces a photograph. A robot arm is a system whose movements are the response to control inputs. In the many contexts in which signals and systems arise, there are a variety of problems and questions that are of importance. In some cases, we are presented with a specific system and are interested in characterizing it in detail to understand how it will respond to various inputs. Examples include the analysis of a circuit in order to quantify its response to different voltage and current sources and the determination of an aircraft’s response characteristics both to pilot commands and to wind gusts. In other problems of signal and system analysis, rather than analyzing existing systems, our interest may be focused on designing systems to process signals in particular ways. One very common context in which such problems arise is in the design of systems to enhance or restore signals that have been degraded in some way. For example, when a pilot is communicating with an air traffic control tower, the communication can be degraded by the high level of background noise in the cockpit. In this and many similar cases, it is possible to design systems that will retain the desired signal, in this case the pilot’s voice, and reject (at least approximately) the unwanted signal, i.e., the noise. A similar set of objectives can also be found in the general area of image restoration and image enhancement. For example, images from deep space probes or earth-observing satellites typically represent degraded versions of the scenes being imaged because of limitations of the imaging equipment, atmospheric effects, and errors in signal transmission in returning the images to earth. Consequently, images returned from space are routinely processed by systems to compensate for some of these degradations. In addition, such images are usuxxvii

xxviii

Foreworc

ally processed to enhance certain features, such as lines (corresponding, for example, to river beds or faults) or regional boundaries in which there are sharp contrasts in color or darkness. In addition to enhancement and restoration, in many applications there is a need to design systems to extract specific pieces of information from signals. The estimation of heart rate from an electrocardiogram is one example. Another arises in economic forecasting. We may, for example, wish to analyze the history of an economic time series, such as a set of stock market averages, in order to estimate trends and other characteristics such as seasonal variations that may be of use in making predictions about future behavior. In other applications, the focus may be on the design of signals with particular properties. Specifically, in communications applications considerable attention is paid to designing signals to meet the constraints and requirements for successful transmission. For example, long distance communication through the atmosphere requires the use of signals with frequencies in a particular part of the electromagnetic spectrum. The design of communication signals must also take into account the need for reliable reception in the presence of both distortion due to transmission through the atmosphere and interference from other signals being transmitted simultaneously by other users. Another very important class of applications in which the concepts and techniques of signal and system analysis arise are those in which we wish to modify or control the characteristics of a given system, perhaps through the choice of specific input signals or by combining the system with other systems. Illustrative of this kind of application is the design of control systems to regulate chemical processing plants. Plants of this type are equipped with a variety of sensors that measure physical signals such as temperature, humidity, and chemical composition. The control system in such a plant responds to these sensor signals by adjusting quantities such as flow rates and temperature in order to regulate the ongoing chemical process. The design of aircraft autopilots and computer control systems represents another example. In this case, signals measuring aircraft speed, altitude, and heading are used by the aircraft’s control system in order to adjust variables such as throttle setting and the position of the rudder and ailerons. These adjustments are made to ensure that the aircraft follows a specified course, to smooth out the aircraft’s ride, and to enhance its responsiveness to pilot commands. In both this case and in the previous example of chemical process control, an important concept, referred to as feedback, plays a major role, as measured signals are fed back and used to adjust the response characteristics of a system. The examples in the preceding paragraphs represent only a few of an extraordinarily wide variety of applications for the concepts of signals and systems. The importance of these concepts stems not only from the diversity of phenomena and processes in which they arise, but also from the collection of ideas, analytical techniques, and methodologies that have been and are being developed and used to solve problems involving signals and systems. The history of this development extends back over many centuries, and although most of this work was motivated by specific applications, many of these ideas have proven to be of central importance to problems in a far larger variety of contexts than those for which they were originally intended. For example, the tools of Fourier analysis, which form the basis for the frequency-domain analysis of signals and systems, and which we will develop in some detail in this book, can be traced from problems of astronomy studied by the ancient Babylonians to the development of mathematical physics in the eighteenth and nineteenth centuries.

Foreword

xxix

In some of the examples that we have mentioned, the signals vary continuously in time, whereas in others, their evolution is described only at discrete points in time. For example, in the analysis of electrical circuits and mechanical systems we are concerned with signals that vary continuously. On the other hand, the daily closing stock market average is by its very nature a signal that evolves at discrete points in time (i.e., at the close of each day). Rather than a curve as a function of a continuous variable, then, the closing stock market average is a sequence of numbers associated with the discrete time instants at which it is specified. This distinction in the basic description of the evolution of signals and of the systems that respond to or process these signals leads naturally to two parallel frameworks for signal and system analysis, one for phenomena and processes that are described in continuous time and one for those that are described in discrete time. The concepts and techniques associated both with continuous-time signals and systems and with discrete-time signals and systems have a rich history and are conceptually closely related. Historically, however, because their applications have in the past been sufficiently different, they have for the most part been studied and developed somewhat separately. Continuous-time signals and systems have very strong roots in problems associated with physics and, in the more recent past, with electrical circuits and communications. The techniques of discrete-time signals and systems have strong roots in numerical analysis, statistics, and time-series analysis associated with such applications as the analysis of economic and demographic data. Over the past several decades, however, the disciplines of continuous-time and discrete-time signals and systems have become increasingly entwined and the applications have become highly interrelated. The major impetus for this has come from the dramatic advances in technology for the implementation of systems and for the generation of signals. Specifically, the continuing development of high-speed digital computers, integrated circuits, and sophisticated high-density device fabrication techniques has made it increasingly advantageous to consider processing continuous-time signals by representing them by time samples (i.e., by converting them to discrete-time signals). As one example, the computer control system for a modem high-performance aircraft digitizes sensor outputs such as vehicle speed in order to produce a sequence of sampled measurements which are then processed by the control system. Because of the growing interrelationship between continuous-time signals and systems and discrete-time signals and systems and because of the close relationship among the concepts and techniques associated with each, we have chosen in this text to develop the concepts of continuous-time and discrete-time signals and systems in parallel. Since many of the concepts are similar (but not identical), by treating them in parallel, insight and intuition can be shared and both the similarities and differences between them become better focused. In addition, as will be evident as we proceed through the material, there are some concepts that are inherently easier to understand in one framework than the other and, once understood, the insight is easily transferable. Furthermore, this parallel treatment greatly facilitates our understanding of the very important practical context in which continuous and discrete time are brought together, namely the sampling of continuous-time signals and the processing of continuous-time signals using discrete-time systems. As we have so far described them, the notions of signals and systems are extremely general concepts. At this level of generality, however, only the most sweeping statements can be made about the nature of signals and systems, and their properties can be discussed only in the most elementary terms. On the other hand, an important and fundamental notion in dealing with signals and systems is that by carefully choosing subclasses of each with

XXX

Foreword

particular properties that can then be exploited, we can analyze and characterize these signals and systems in great depth. The principal focus in this book is on the particular class of linear time-invariant systems. The properties of linearity and time invariance that define this class lead to a remarkable set of concepts and techniques which are not only of major practical importance but also analytically tractable and intellectually satisfying. As we have emphasized in this foreword, signal and system analysis has a long history out of which have emerged some basic techniques and fundamental principles which have extremely broad areas of application. Indeed, signal and system analysis is constantly evolving and developing in response to new problems, techniques, and opportunities. We fully expect this development to accelerate in pace as improved technology makes possible the implementation of increasingly complex systems and signal processing techniques. In the future we will see signals and systems tools and concepts applied to an expanding scope of applications. For these reasons, we feel that the topic of signal and system analysis represents a body of knowledge that is of essential concern to the scientist and engineer. We have chosen the set of topics presented in this book, the organization of the presentation, and the problems in each chapter in a way that we feel will most help the reader to obtain a solid foundation in the fundamentals of signal and system analysis; to gain an understanding of some of the very important and basic applications of these fundamentals to problems in filtering, sampling, communications, and feedback system analysis; and to develop some appreciation for an extremely powerful and broadly applicable approach to formulating and solving complex problems.

1 SIGNALS AND SYSTEMS

1.0 INTRODUCTION As described in the Foreword, the intuitive notions of signals and systems arise in a rich variety of contexts. Moreover, as we will see in this book, there is an analytical frameworkthat is, a language for describing signals and systems and an extremely powerful set of tools for analyzing them-that applies equally well to problems in many fields. In this chapter, we begin our development of the analytical framework for signals and systems by introducing their mathematical description and representations. In the chapters that follow, we build on this foundation in order to develop and describe additional concepts and methods that add considerably both to our understanding of signals and systems and to our ability to analyze and solve problems involving signals and systems that arise in a broad array of applications.

1. 1 CONTINUOUS-TIME AND DISCRETE-TIME SIGNALS 1 . 1 . 1 Examples and Mathematical Representation Signals may describe a wide variety of physical phenomena. Although signals can be represented in many ways, in all cases the information in a signal is contained in a pattern of variations of some form. For example, consider the simple circuit in Figure 1.1. In this case, the patterns of variation over time in the source and capacitor voltages, v, and Vc, are examples of signals. Similarly, as depicted in Figure 1.2, the variations over time of the applied force f and the resulting automobile velocity v are signals. As another example, consider the human vocal mechanism, which produces speech by creating fluctuations in acoustic pressure. Figure 1.3 is an illustration of a recording of such a speech signal, obtained by 1

Signals and Systems

2

Chap. 1

R

c ~pv

Figure 1.2 An automobile responding to an applied force t from the engine and to a retarding frictional force pv proportional to the automobile’s velocity v.

A simple RC circuit with source and capacitor voltage Vc.

Figure 1. 1

voltage

Vs

~——————-200msec——————–~

I

I

1_ _ _ _ _

sh

j

r – – – –

I

.!_ _____ 1_ _ _ _ _ !._ _____1 _ _ _ _ _ _ _ _ _ _

-~-

~

oul

– – – – I – – – – I – – – – –

~-

I

_ _ _ _ _ I_ _ _ _ _ J

d

– – – – I – – – –

-~-

– – – – I – – – –

I

I

-~

I

I

I

I I

I

I

~——————————————-

w

e

Example of a recording of speech. [Adapted from Applications of Digital Signal Processing, A.V. Oppenheim, ed. (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1978), p. 121.] The signal represents acoustic pressure variations as a function of time for the spoken words “should we chase.” The top line of the figure corresponds to the word “should,” the second line to the word “we,” and the last two lines to the word “chase.” (We have indicated the approximate beginnings and endings of each successive sound in each word.) Figure 1.3

r – – – –

-~-

I

– – – – I – – – – I – – – – –

~-

– – – – I – – – –

-~-

– – – – I – – – –

I

-~

I

I ~ _ _ _ _ _ 1 _ _ _ _ _ ~ _ _ _ _ ~ _ _ _ _ _ 1_ _ _ _ _

.!_ _____ I _ _ _ _ _

a

ch

I

1_ _ _ _ _

I ~

_ _ _ _ _ 1_ _ _ _ _

a

~

~ _____I

_ _ _ _ _1 _ _ _ _ _ I_ _ _ _ _

I

I

I

~

_ _ _ _ _ 1_ _ _ _ _ J

se

using a microphone to sense variations in acoustic pressure, which are then converted into an electrical signal. As can be seen in the figure, different sounds correspond to different patterns in the variations of acoustic pressure, and the human vocal system produces intelligible speech by generating particular sequences of these patterns. Alternatively, for the monochromatic picture, shown in Figure 1.4, it is the pattern of variations in brightness across the image, that is important.

Continuous-Time and Discrete-Time Signals

Sec. 1.1

3

Figure 1.4

A monochromatic

picture.

Signals are represented mathematically as functions of one or more independent variables. For example, a speech signal can be represented mathematically by acoustic pressure as a function of time, and a picture can be represented by brightness as a function of two spatial variables. In this book, we focus our attention on signals involving a single independent variable. For convenience, we will generally refer to the independent variable as time, although it may not in fact represent time in specific applications. For example, in geophysics, signals representing variations with depth of physical quantities such as density, porosity, and electrical resistivity are used to study the structure of the earth. Also, knowledge of the variations of air pressure, temperature, and wind speed with altitude are extremely important in meteorological investigations. Figure 1.5 depicts a typical example of annual average vertical wind profile as a function of height. The measured variations of wind speed with height are used in examining weather patterns, as well as wind conditions that may affect an aircraft during final approach and landing. Throughout this book we will be considering two basic types of signals: continuoustime signals and discrete-time signals. In the case of continuous-time signals the independent variable is continuous, and thus these signals are defined for a continuum of values 26 24 22 20 18 :§’16 0

~ 14 ~ 12

~ 10

(f)

8 6 4 2 0

200

400

600

800

1,000 1,200 1,400 1,600

Height (feet)

Figure 1 .s Typical annual vertical wind profile. (Adapted from Crawford and Hudson, National Severe Storms Laboratory Report, ESSA ERLTM-NSSL 48, August 1970.)

Signals and Systems

4

Chap. 1

400350 300 250 200 150 100 50

ot Jan. 5,1929

Jan. 4,1930

Figure 1.6 An example of a discrete-time signal: The weekly Dow-Jones stock market index from January 5, 1929, to January 4, 1930.

of the independent variable. On the other hand, discrete-time signals are defined only at discrete times, and consequently, for these signals, the independent variable takes on only a discrete set of values. A speech signal as a function of time and atmospheric pressure as a function of altitude are examples of continuous-time signals. The weekly Dow-Jones stock market index, as illustrated in Figure 1.6, is an example of a discrete-time signal. Other examples of discrete-time signals can be found in demographic studies in which various attributes, such as average budget, crime rate, or pounds of fish caught, are tabulated against such discrete variables as family size, total population, or type of fishing vessel, respectively. To distinguish between continuous-time and discrete-time signals, we will use the symbol t to denote the continuous-time independent variable and n to denote the discretetime independent variable. In addition, for continuous-time signals we will enclose the independent variable in parentheses ( · ), whereas for discrete-time signals we will use brackets [ · ] to enclose the independent variable. We will also have frequent occasions when it will be useful to represent signals graphically. Illustrations of a continuous-time signal x(t) and a discrete-time signal x[n] are shown in Figure 1.7. It is important to note that the discrete-time signal x[n] is defined only for integer values of the independent variable. Our choice of graphical representation for x[ n] emphasizes this fact, and for further emphasis we will on occasion refer to x[n] as a discrete-time sequence. A discrete-time signal x[n] may represent a phenomenon for which the independent variable is inherently discrete. Signals such as demographic data are examples of this. On the other hand, a very important class of discrete-time signals arises from the sampling of continuous-time signals. In this case, the discrete-time signal x[n] represents successive samples of an underlying phenomenon for which the independent variable is continuous. Because of their speed, computational power, and flexibility, modem digital processors are used to implement many practical systems, ranging from digital autopilots to digital audio systems. Such systems require the use of discrete-time sequences representing sampled versions of continuous-time signals–e.g., aircraft position, velocity, and heading for an

Sec. 1.1

Continuous-Time and Discrete-Time Signals

5

x(t)

0 (a) x[n]

x[O]

n

Figure 1. 7

Graphical representations of (a) continuous-time and (b) discrete-

time signals.

autopilot or speech and music for an audio system. Also, pictures in newspapers-or in this book, for that matter-actually consist of a very fine grid of points, and each of these points represents a sample of the brightness of the corresponding point in the original image. No matter what the source of the data, however, the signal x[n] is defined only for integer values of n. It makes no more sense to refer to the 3 ~ th sample of a digital speech signal than it does to refer to the average budget for a family with 2~ family members. Throughout most of this book we will treat discrete-time signals and continuous-time signals separately but in parallel, so that we can draw on insights developed in one setting to aid our understanding of another. In Chapter 7 we will return to the question of sampling, and in that context we will bring continuous-time and discrete-time concepts together in order to examine the relationship between a continuous-time signal and a discrete-time signal obtained from it by sampling.

1. 1.2 Signal Energy and Power From the range of examples provided so far, we see that signals may represent a broad variety of phenomena. In many, but not all, applications, the signals we consider are directly related to physical quantities capturing power and energy in a physical system. For example, if v(t) and i(t) are, respectively, the voltage and current across a resistor with resistance R, then the instantaneous power is p(t)

=

v(t)i(t)

=

~v2 (t).

(1.1)

Signals and Systems

6

The total energy expended over the time interval 12 {

J

t]

12

p(t) dt = { Jf]

Chap. 1

t 1 :s t :s t 2 is

~v 2 (t) dt,

(1.2)

and the average power over this time interval is 1

1 2(t)dt. -1- J 2 p(t)dt = -1- Jt2 -Rv t2 – t! f] t2 – tl f]

(1.3)

Similarly, for the automobile depicted in Figure 1.2, the instantaneous power dissipated through friction is p(t) = bv 2(t), and we can then define the total energy and average power over a time interval in the same way as in eqs. (1.2) and (1.3). With simple physical examples such as these as motivation, it is a common and worthwhile convention to use similar terminology for power and energy for any continuoustime signal x(t) or any discrete-time signal x[n]. Moreover, as we will see shortly, we will frequently find it convenient to consider signals that take on complex values. In this case, the total energy over the time interval t 1 :s t :s t 2 in a continuous-time signal x(t) is defined as (1.4) where lxl denotes the magnitude of the (possibly complex) number x. The time-averaged power is obtained by dividing eq. (1.4) by the length, t2 – t 1, of the time interval. Similarly, the total energy in a discrete-time signal x[n] over the time interval n 1 :s n :s n 2 is defined as (1.5) and dividing by the number of points in the interval, n2 – n 1 + 1, yields the average power over the interval. It is important to remember that the terms “power” and “energy” are used here independently of whether the quantities in eqs. (1.4) and (1.5) actually are related to physical energy. 1 Nevertheless, we will find it convenient to use these terms in a general fashion. Furthermore, in many systems we will be interested in examining power and energy in signals over an infinite time interval, i.e., for -oo < t < +oo or for -oo < n < +oo. In these cases, we define the total energy as limits of eqs. (1.4) and (1.5) as the time interval increases without bound. That is, in continuous time, Eoo ~ }~ I T -T 2 lx(t)l dt = I +oc -oc 2 lx(t)l dt, (1.6) lx[nJI 2 . (1.7) and in discrete time, Eoo ~ lim N~oo +N L n=-N +oo lx[nJI 2 = L n=-oc 1 Even if such a relationship does exist, eqs. ( 1.4) and ( 1.5) may have the wrong dimensions and scalings. For example, comparing eqs. (1.2) and (1.4 ), we see that if x(t) represents the voltage across a resistor, then eq. (1.4) must be divided by the resistance (measured, for example, in ohms) to obtain units of physical energy. Sec. 1.2 Transformations of the Independent Variable 7 Note that for some signals the integral in eq. (1.6) or sum in eq. (1.7) might not convergee.g., if x(t) or x[n] equals a nonzero constant value for all time. Such signals have infinite energy, while signals with E~ < co have finite energy. In an analogous fashion, we can define the time-averaged power over an infinite interval as (', lim - 1 PeN = T---+~ 2 T IT -T /x(t)/ 2 dt (1.8) and PeN ~ +N 1 lim N---+~ 2N + L 1 n=-N /x[n]/ 2 (1.9) in continuous time and discrete time, respectively. With these definitions, we can identify three important classes of signals. The first of these is the class of signals with finite total energy, i.e., those signals for which Eoo 0, then, of necessity, Ex = co. This, of course, makes sense, since if there is a nonzero average energy per unit time (i.e., nonzero power), then integrating or summing this over an infinite time interval yields an infinite amount of energy. For example, the constant signal x[n] = 4 has infinite energy, but average power Px = 16. There are also signals for which neither Px nor Ex are finite. A simple example is the signal x(t) = t. We will encounter other examples of signals in each of these classes in the remainder of this and the following chapters. 1.2 TRANSFORMATIONS OF THE INDEPENDENT VARIABlE A central concept in signal and system analysis is that of the transformation of a signal. For example, in an aircraft control system, signals corresponding to the actions of the pilot are transformed by electrical and mechanical systems into changes in aircraft thrust or the positions of aircraft control surfaces such as the rudder or ailerons, which in tum are transformed through the dynamics and kinematics of the vehicle into changes in aircraft velocity and heading. Also, in a high-fidelity audio system, an input signal representing music as recorded on a cassette or compact disc is modified in order to enhance desirable characteristics, to remove recording noise, or to balance the several components of the signal (e.g., treble and bass). In this section, we focus on a very limited but important class of elementary signal transformations that involve simple modification of the independent variable, i.e., the time axis. As we will see in this and subsequent sections of this chapter, these elementary transformations allow us to introduce several basic properties of signals and systems. In later chapters, we will find that they also play an important role in defining and characterizing far richer and important classes of systems. Signals and Systems 8 Chap. 1 1.2. 1 Examples of Transformations of the Independent Variable A simple and very important example of transforming the independent variable of a signal is a time shift. A time shift in discrete time is illustrated in Figure 1.8, in which we have two signals x[n] and x[n- n0 ] that are identical in shape, but that are displaced or shifted relative to each other. We will also encounter time shifts in continuous time, as illustrated in Figure 1.9, in which x(t - t0 ) represents a delayed (if to is positive) or advanced (if to is negative) version of x(t). Signals that are related in this fashion arise in applications such as radar, sonar, and seismic signal processing, in which several receivers at different locations observe a signal being transmitted through a medium (water, rock, air, etc.). In this case, the difference in propagation time from the point of origin of the transmitted signal to any two receivers results in a time shift between the signals at the two receivers. A second basic transformation of the time axis is that of time reversal. For example, as illustrated in Figure 1.1 0, the signal x[- n] is obtained from the signal x[ n] by a reflection about n = 0 (i.e., by reversing the signal). Similarly, as depicted in Figure 1.11, the signal x(- t) is obtained from the signal x(t) by a reflection about t = 0. Thus, if x(t) represents an audio tape recording, then x( -t) is the same tape recording played backward. Another transformation is that of time scaling. In Figure 1.12 we have illustrated three signals, x(t), x(2t), and x(t/2), that are related by linear scale changes in the independent variable. If we again think of the example of x(t) as a tape recording, then x(2t) is that recording played at twice the speed, and x(t/2) is the recording played at half-speed. It is often of interest to determine the effect of transforming the independent variable of a given signal x(t) to obtain a signal of the form x(at + {3), where a and {3 are given numbers. Such a transformation of the independent variable preserves the shape of x(t), except that the resulting signal may be linearly stretched if Ia I < 1, linearly compressed if Ia I > 1, reversed in time if a < 0, and shifted in time if {3 is nonzero. This is illustrated in the following set of examples. x[n] n x[n-n 0] Figure 1.8 0 n Discrete-time signals related by a time shift. In this figure n0 > 0, so that x[n- n0 ] is a delayed verson of x[n] (i.e., each point in x[n] occurs later in x[n- n0 ]).

Sec. 1.2

Transformations of the Independent Variable

9 x[n)

n

(a)

x[-n)

n

Continuous-time signals related by a time shift. In this figure t0 < 0, so that x(t - to) is an advanced version of x(t) (i.e., each point in x(t) occurs at an earlier time in x(t - to)). Figure 1.9 (b) Figure 1 .1 O (a) A discrete-time signal x[n]; (b) its reflection x[-n] about n = 0. x(t) x(t) d\ x(2t) (a) & x(-t) x(t/2) (b) (a) A continuous-time signal x(t); (b) its reflection x( - t) about t = 0. Figure 1.11 ~ Continuous-time signals related by time scaling. Figure 1. 12 Signals and Systems 10 Chap. 1 Example 1.1 Given the signal x(t) shown in Figure l.13(a), the signal x(t + 1) corresponds to an advance (shift to the left) by one unit along the taxis as illustrated in Figure l.13(b). Specifically, we note that the value of x(t) at t = to occurs in x(t + 1) at t = to - 1. For 1 1 'l'I 1 0 2 (a) 1~ -1 2 1 0 (b) -1 1 (c) 0 I 1 '1i11 -~ 0 2/3 4/3 (d) -2/3 0 2/3 (e) Figure 1. 13 (a) The continuous-time signal x(t) used in Examples 1.1-1.3 to illustrate transformations of the independent variable; (b) the time-shifted signal x(t + 1); (c) the signal x(-t + 1) obtained by a time shift and a time reversal; (d) the time-scaled signal t); and (e) the signal t + 1) obtained by time-shifting and scaling. xa xa Sec. 1.2 Transformations of the Independent Variable 11 example, the value of x(t) at t = 1 is found in x(t + 1) at t = 1 - 1 = 0. Also, since x(t) is zero fort < 0, we have x(t + 1) zero fort < -1. Similarly, since x(t) is zero for t > 2, x(t + 1) is zero for t > 1. Let us also consider the signal x( – t + 1), which may be obtained by replacing t with -t in x(t + 1). That is, x(-t + 1) is the time reversed version of x(t + 1). Thus, x( – t + 1) may be obtained graphically by reflecting x( t + 1) about the t axis as shown in Figure 1.13(c).

Example 1.2 Given the signal x(t), shown in Figure l.13(a), the signal x(~t) corresponds to a linear compression of x(t) by a factor of~ as illustrated in Figure l.13(d). Specifically we note that the value of x(t) at t = to occurs in x(~t) at t = ~t0 . For example, the value of x(t) at t = 1 is found in x(~t) at t = ~ (1) = ~-Also, since x(t) is zero fort< 0, we have x(~t) zero fort< 0. Similarly, since x(t) is zero fort> 2, x(~t) is zero fort> ~-

Example 1.3 Suppose that we would like to determine the effect of transforming the independent variable of a given signal, x(t), to obtain a signal of the form x(at + /3), where a and f3 are given numbers. A systematic approach to doing this is to first delay or advance x(t) in accordance with the value of f3, and then to perform time scaling and/or time reversal on the resulting signal in accordance with the value of a. The delayed or advanced signal is linearly stretched if fa[ < 1, linearly compressed if fa[ > 1, and reversed in time if a < 0. To illustrate this approach, let us show how x( ~ t + 1) may be determined for the signal x(t) shown in Figure 1.13(a). Since f3 = 1, we first advance (shift to the left) x(t) by 1 as shown· in Figure 1.l 3(b ). Since fa [ = ~, we may linearly compress the shifted signal of Figure 1.13(b) by a factor of~ to obtain the signal shown in Figure 1.13(e). In addition to their use in representing physical phenomena such as the time shift in a sonar signal and the speeding up or reversal of an audiotape, transformations of the independent variable are extremely useful in signal and system analysis. In Section 1.6 and in Chapter 2, we will use transformations of the independent variable to introduce and analyze the properties of systems. These transformations are also important in defining and examining some important properties of signals. 1.2.2 Periodic Signals An important class of signals that we will encounter frequently throughout this book is the class of periodic signals. A periodic continuous-time signal x(t) has the property that there is a positive value of T for which x(t) = x(t + T) (l.11) for all values oft. In other words, a periodic signal has the property that it is unchanged by a time shift of T. In this case, we say that x(t) is periodic with period T. Periodic continuoustime signals arise in a variety of contexts. For example, as illustrated in Problem 2.61, the natural response of systems in which energy is conserved, such as ideal LC circuits without resistive energy dissipation and ideal mechanical systems without frictional losses, are periodic and, in fact, are composed of some of the basic periodic signals that we will introduce in Section 1.3. Signals and Systems 12 Chap. 1 x(t) ···!\ [\ -2T -T & [\ !\··· 2T T 0 Figure 1. 14 A continuous-time periodic signal. An example of a periodic continuous-time signal is given in Figure 1.14. From the figure or from eq. ( 1.11 ), we can readily deduce that if x(t) is periodic with period T, then x(t) = x(t + mT) for all t and for any integer m. Thus, x(t) is also periodic with period 2T, 3T, 4T, .... The fundamental period To of x(t) is the smallest positive value ofT for which eq. ( 1.11) holds. This definition of the fundamental period works, except if x(t) is a constant. In this case the fundamental period is undefined, since x(t) is periodic for any choice ofT (so there is no smallest positive value). A signal x(t) that is not periodic will be referred to as an aperiodic signal. Periodic signals are defined analogously in discrete time. Specifically, a discretetime signal x[n] is periodic with period N, where N is a positive integer, if it is unchanged by a time shift of N, i.e., if x[n] = x[n + N] (1.12) for all values of n. If eq. (1.12) holds, then x[n] is also periodic with period 2N, 3N, .... The fundamental period N0 is the smallest positive value of N for which eq. ( 1.12) holds. An example of a discrete-time periodic signal with fundamental period No = 3 is shown in Figure 1.15. x[n] A discrete-time periodic signal with fundamental period Figure 1 . 1 5 n No= 3. Example 1.4 Let us illustrate the type of problem solving that may be required in determining whether or not a given signal is periodic. The signal whose periodicity we wish to check is given by X (t ) = { cos(t) . sm(t) if . t< 0 . If t ~ 0 (1.13) From trigonometry, we know that cos(t + 27T) = cos(t) and sin(t + 27T) = sin(t). Thus, considering t > 0 and t < 0 separately, we see that x(t) does repeat itself over every interval oflength 27T. However, as illustrated in Figure 1.16, x(t) also has a discontinuity at the time origin that does not recur at any other time. Since every feature in the shape of a periodic signal must recur periodically, we conclude that the signal x(t) is not periodic. Sec. 1.2 Transformations of the Independent Variable 13 x(t) The signal x{t) considered in Example 1.4. Figure 1. 16 1 .2.3 Even and Odd Signals Another set of useful properties of signals relates to their symmetry under time reversal. A signal x(t) or x[n] is referred to as an even signal if it is identical to its time-reversed counterpart, i.e., with its reflection about the origin. In continuous time a signal is even if x(- t) = x(t), (1.14) x[- n] = x[n]. ( 1.15) x( -t) x(t), ( 1.16) x[-n] = -x[n]. (1.17) while a discrete-time signal is even if A signal is referred to as odd if = - An odd signal must necessarily be 0 at t = 0 or n = 0, since eqs. ( 1.16) and ( 1.17) require that x(O) = - x(O) and x[O] = - x[O]. Examples of even and odd continuous-time signals are shown in Figure 1.17. x(t) 0 (a) x(t) Figure 1. 1 7 (a) An even continuous-time signal; (b) an odd continuous-time signal. Signals and Systems 14 Chap. 1 x[n] = { 1, n;::::: 0 0, n < 0 -3-2-1 0 1 2 3 Sv{x[nl} = { n ~: ~: ~ 2, n > 0

t t 1I~~ t t… 1

-3-2-1

n

0 1 2 3

ea{x[nl}=

~· n < 0 ?·n=O { 2, n > 0

1

rrr ···1110123 2

-3-2-1

n

1

-2

Figure 1. 18 Example of the evenodd decomposition of a discrete-time signal.

An important fact is that any signal can be broken into a sum of two signals, one of which is even and one of which is odd. To see this, consider the signal 8v { x(t)} =

1

2

[x(t) + x(- t)],

( 1.18)

which is referred to as the even part of x(t). Similarly, the odd part of x(t) is given by 0d{x(t)} =

1

[x(t)- x( -t)].

2

(1.19)

It is a simple exercise to check that the even part is in fact even, that the odd part is odd, and that x(t) is the sum of the two. Exactly analogous definitions hold in the discretetime case. An example of the even -odd decomposition of a discrete-time signal is given in Figure 1.18.

1 .3 EXPONENTIAL AND SINUSOIDAL SIGNALS In this section and the next, we introduce several basic continuous-time and discrete-time signals. Not only do these signals occur frequently, but they also serve as basic building blocks from which we can construct many other signals.

Sec. 1.3

Exponential and Sinusoidal Signals

15

1 .3. 1 Continuous-Time Complex Exponential and Sinusoidal Signals The continuous-time complex exponential signal is of the form x(t) = Ce 01 ,

(1.20)

where C and a are, in general, complex numbers. Depending upon the values of these parameters, the complex exponential can exhibit several different characteristics. Real Exponential Signals

As illustrated in Figure 1.19, if C and a are real [in which case x(t) is called a real exponential], there are basically two types of behavior. If a is positive, then as t increases x(t) is a growing exponential, a form that is used in describing many different physical processes, including chain reactions in atomic explosions and complex chemical reactions. If a is negative, then x(t) is a decaying exponential, a signal that is also used to describe a wide variety of phenomena, including the process of radioactive decay and the responses of RC circuits and damped mechanical systems. In particular, as shown in Problems 2.61 and 2.62, the natural responses of the circuit in Figure 1.1 and the automobile in Figure 1.2 are decaying exponentials. Also, we note that for a = 0, x(t) is constant.

x(t)

(a)

x(t)

(b)

Figure 1.19 Continuous-time real exponential x(t) = Ce 31 : (a) a > 0; (b) a< 0. Signals and Systems 16 Chap. 1 Periodic Complex Exponential and Sinusoidal Signals A second important class of complex exponentials is obtained by constraining a to be purely imaginary. Specifically, consider (1.21) An important property of this signal is that it is periodic. To verify this, we recall from eq. (1.11) that x(t) will be periodic with period T if (1.22) Or, since it follows that for periodicity, we must have (1.23) If w 0 = 0, then x(t) = 1, which is periodic for any value ofT. If w 0 =/:- 0, then the fundamental period To of x(t)-that is, the smallest positive value ofT for which eq. (1.23) holds-is 21T To = (1.24) lwol' Thus, the signals eiwot and e- Jwot have the same fundamental period. A signal closely related to the periodic complex exponential is the sinusoidal signal x(t) = A cos(wot + cf>),

(1.25)

as illustrated in Figure 1.20. With seconds as the units oft, the units of cf> and w 0 are radians and radians per second, respectively. It is also common to write w 0 = 21T fo, where fo has the units of cycles per second, or hertz (Hz). Like the complex exponential signal, the sinusoidal signal is periodic with fundamental period T 0 given by eq. (1.24). Sinusoidal and

x(t) = A cos (w 0t

+ )

Figure 1.20

soidal signal.

Continuous-time sinu-

Sec. 1.3

Exponential and Sinusoidal Signals

17

complex exponential signals are also used to describe the characteristics of many physical processes-in particular, physical systems in which energy is conserved. For example, as shown in Problem 2.61, the natural response of an LC circuit is sinusoidal, as is the simple harmonic motion of a mechanical system consisting of a mass connected by a spring to a stationary support. The acoustic pressure variations corresponding to a single musical tone are also sinusoidal. By using Euler’s relation? the complex exponential in eq. (1.21) can be written in terms of sinusoidal signals with the same fundamental period: e.iwot = cos Wot + j sin wot.

(1.26)

Similarly, the sinusoidal signal of eq. (1.25) can be written in terms of periodic complex exponentials, again with the same fundamental period: (1.27) Note that the two exponentials in eq. (1.27) have complex amplitudes. Alternatively, we can express a sinusoid in terms of a complex exponential signal as (1.28) where, if cis a complex number, CRe{ c} denotes its real part. We will also use the notation 9m{c} for the imaginary part of c, so that, for example, A sin(wot + ¢)

=

A9m{ej(wut+¢l}.

(1.29)

From eq. (1.24), we see that the fundamental period T0 of a continuous-time sinusoidal signal or a periodic complex exponential is inversely proportional to lw 0 j, which we will refer to as the fundamental frequency. From Figure 1.21, we see graphically what this means. If we decrease the magnitude of w 0 , we slow down the rate of oscillation and therefore increase the period. Exactly the opposite effects occur if we increase the magnitude of w 0 . Consider now the case w0 = 0. In this case, as we mentioned earlier, x(t) is constant and therefore is periodic with period T for any positive value of T. Thus, the fundamental period of a constant signal is undefined. On the other hand, there is no ambiguity in defining the fundamental frequency of a constant signal to be zero. That is, a constant signal has a zero rate of oscillation. Periodic signals-and in particular, the complex periodic exponential signal in eq. (1.21) and the sinusoidal signal in eq. (1.25)-provide important examples of signals with infinite total energy but finite average power. For example, consider the periodic exponential signal of eq. (1.21), and suppose that we calculate the total energy and average power in this signal over one period: E period =

f.oTo

je.iwoti2 dt

( 1.30) =

foT”

I · d t = To.

2 Euler’s relation and other basic ideas related to the manipulation of complex numbers and exponentials are considered in the mathematical review section of the problems at the end of the chapter.

Signals and Systems

18

Chap. 1

(a)

(b)

Figure 1.21 Relationship between the fundamental frequency and period for continuous-time sinusoidal signals; here, w1 > £1>2 > w 3 , which implies that T1 < T2 < r3. (c) 1 Pperiod = T Eperiod = 1. (1.31) 0 Since there are an infinite number of periods as t ranges from -'X! to +oo, the total energy integrated over all time is infinite. However, each period of the signal looks exactly the same. Since the average power of the signal equals 1 over each period, averaging over multiple periods always yields an average power of 1. That is, the complex periodic ex- Sec. 1.3 Exponential and Sinusoidal Signals 19 ponential signal has finite average power equal to Px = _lim _1 r~x 2T f T -T leiwotl2 dt = 1. (1.32) Problem 1.3 provides additional examples of energy and power calculations for periodic and aperiodic signals. Periodic complex exponentials will play a central role in much of our treatment of signals and systems, in part because they serve as extremely useful building blocks for many other signals. We will often find it useful to consider sets of harmonically related complex exponentials- that is, sets of periodic exponentials, all of which are periodic with a common period T0 . Specifically, a necessary condition for a complex exponential ejwr to be periodic with period T0 is that (1.33) which implies that wT0 is a multiple of 27T, i.e., k = 0,::!::: 1, ±:2, wTo = 21Tk, 0. 0 ( 1.34) 0 Thus, if we define wo 27T To' = ( 1.35) we see that, to satisfy eq. ( 1.34), w must be an integer multiple of w0 . That is, a harmonically related set of complex exponentials is a set of periodic exponentials with fundamental frequencies that are all multiples of a single positive frequency w0 : k = 0, ::!::: 1, ±:2,. 0. 0 ( 1.36) Fork = 0, ¢k(t) is a constant, while for any other value of k, ¢k(t) is periodic with fundamental frequency Iklwo and fundamental period 27T _ To lklwo - m· ( 1.37) The kth harmonic ¢k(t) is still periodic with period T0 as well, as it goes through exactly lkl of its fundamental periods during any time interval of length T0 . Our use of the term "harmonic" is consistent with its use in music, where it refers to tones resulting from variations in acoustic pressure at frequencies that are integer multiples of a fundamental frequency. For example, the pattern of vibrations of a string on an instrument such as a violin can be described as a superposition-i.e., a weighted sum-of harmonically related periodic exponentials. In Chapter 3, we will see that we can build a very rich class of periodic signals using the harmonically related signals of eq. ( 1.36) as the building blocks. Example 1.5 It is sometimes desirable to express the sum of two complex exponentials as the product of a single complex exponential and a single sinusoid. For example, suppose we wish to Signals and Systems 20 Chap. 1 plot the magnitude of the signal (1.38) To do this, we first factor out a complex exponential from the right side of eq. (1.38), where the frequency of this exponential factor is taken as the average of the frequencies of the two exponentials in the sum. Doing this, we obtain (1.39) which, because of Euler's relation, can be rewritten as x(t) = 2ej 2.51 cos(0.5t). (1.40) From this, we can directly obtain an expression for the magnitude of x(t): lx(t)l = 21 cos(0.5t)l. (1.41) Here, we have used the fact that the magnitude of the complex exponential ej 2·51 is always unity. Thus, lx(t)l is what is commonly referred to as a full-wave rectified sinusoid, as shown in Figure 1.22. lx(t)l 2 Figure 1 .22 The full-wave rectified sinusoid of Example 1.5. General Complex Exponential Signals The most general case of a complex exponential can be expressed and interpreted in terms of the two cases we have examined so far: the real exponential and the periodic complex exponential. Specifically, consider a complex exponential C eat, where C is expressed in polar form and a in rectangular form. That is, and a= r + Jwo. Then (1.42) Using Euler's relation, we can expand this further as C eat = ICiert cos(wot + 0) + JICiert sin(wot + 0). (1.43) Sec. 1.3 Exponential and Sinusoidal Signals 21 Thus, for r = 0, the real and imaginary parts of a complex exponential are sinusoidal. For r > 0 they correspond to sinusoidal signals multiplied by a growing exponential, and for r < 0 they correspond to sinusoidal signals multiplied by a decaying exponential. These two cases are shown in Figure 1.23. The dashed lines in the figure correspond to the functions ± ICiert. From eq. ( 1.42), we see that ICiert is the magnitude of the complex exponential. Thus, the dashed curves act as an envelope for the oscillatory curve in the figure in that the peaks of the oscillations just reach these curves, and in this way the envelope provides us with a convenient way to visualize the general trend in the amplitude of the oscillations. x(t) (a) x(t) (b) Figure 1 .23 (a) Growing sinusoidal signal x(t) = Cert cos (w0 t + 8), r > 0; (b) decaying sinusoid x{t) = Cert cos (w 0 t + 8), r < 0. Sinusoidal signals multiplied by decaying exponentials are commonly referred to as damped sinusoids. Examples of damped sinusoids arise in the response of RLC circuits and in mechanical systems containing both damping and restoring forces, such as automotive suspension systems. These kinds of systems have mechanisms that dissipate energy (resistors, damping forces such as friction) with oscillations that decay in time. Examples illustrating such systems and their damped sinusoidal natural responses can be found in Problems 2.61 and 2.62. 1.3.2 Discrete-Time Complex Exponential and Sinusoidal Signals As in continuous time, an important signal in discrete time is the complex exponential signal or sequence, defined by ( 1.44) Signals and Systems 22 Chap. 1 where C and a are, in general, complex numbers. This could alternatively be expressed in the form x[n] = Cef3 11 , (1.45) where Although the form of the discrete-time complex exponential sequence given in eq. (1.45) is more analogous to the form of the continuous-time exponential, it is often more convenient to express the discrete-time complex exponential sequence in the form of eq. (1.44). Real Exponential Signals If C and a are rea

ALAN V. OPPENHEIM and ALAN S. WILLSKY with S. HAMID NAWAB

Advertisement

This book is the second edition of a text designed for undergraduate courses in signals and systems. While such courses are frequently found in electrical engineering curricula, the concepts and techniques that form the core of the subject are of fundamental importance in all engineering disciplines. In fact, the scope of potential and actual applications of the methods of signal and system analysis continues to expand as engineers are confronted with new challenges involving the synthesis or analysis of complex processes. For these reasons we feel that a course in signals and systems not only is an essential element in an engineering program but also can be one of the most rewarding, exciting, and useful courses that engineering students take during their undergraduate education. Our treatment of the subject of signals and systems in this second edition maintains the same general philosophy as in the first edition but with significant rewriting, restructuring, and additions.

These changes are designed to help both the instructor in presenting the subject material and the student in mastering it. In the preface to the first edition we stated that our overall approach to signals and systems had been guided by the continuing developments in technologies for signal and system design and implementation, which made it increasingly important for a student to have equal familiarity with techniques suitable for analyzing and synthesizing both continuous-time and discrete-time systems. As we write the preface to this second edition, that observation and guiding principle are even more true than before.

Thus, while students studying signals and systems should certainly have a solid foundation in disciplines based on the laws of physics, they must also have a firm grounding in the use of computers for the analysis of phenomena and the implementation of systems and algorithms. As a consequence, engineering curricula now reflect a blend of subjects, some involving continuous-time models and others focusing on the use of computers and discrete representations. For these reasons, signals and systems courses that bring discrete time and continuous-time concepts together in a unified way play an increasingly important role in the education of engineering students and in their preparation for current and future developments in their chosen fields.

[PDF] Signals and Systems [Alan V. Oppenheim, Alan S. Willsky with S. Hamid Nawab] [2nd Edition].pdf

Designed and built with ♥ by Erik Fong . Licensed under the MIT License. The source code can be found at Github

♥ Please donate to keep our website running. ♥

1JBEG65wHHw1TpjJps52vMR5vYZhggQmNG

0x4B6F4c9817eec0BFa7e1c51B232a114de0Bc9B2B

BITCOIN:ETHEREUM:

Signals And Systems, 2nd Edition [PDF] [7h4hl19o34v0]

E-Book Overview

E-Book Content

CHAPTER 1 1.1 to 1.41 – part of text

1.42

(a) Periodic: Fundamental period = 0.5s (b) Nonperiodic (c) Periodic Fundamental period = 3s (d) Periodic Fundamental period = 2 samples (e) Nonperiodic (f) Periodic: Fundamental period = 10 samples (g) Nonperiodic (h) Nonperiodic (i) Periodic: Fundamental period = 1 sample

l.43

π 2 y ( t ) =  3 cos  200t + —    6  2 π = 9 cos  200t + —  6

9 π = — cos  400t + — 1  2 3 9 (a) DC component = –2 9 π (b) Sinusoidal component = — cos  400t + —  2 3 9 Amplitude = –2

1

200 Fundamental frequency = ——— Hz π

1.44

The RMS value of sinusoidal x(t) is A ⁄ 2 . Hence, the average power of x(t) in a 1-ohm 2

resistor is ( A ⁄ 2 ) = A2/2.

1.45

Let N denote the fundamental period of x[N]. which is defined by 2π N = —–Ω The average power of x[n] is therefore N -1

1 2 P = —- ∑ x [ n ] N 1 = —N

n=0 N -1

∑A

n=0 2 N -1

A = —–N

1.46

2

2 2πn cos  ———- + φ  N 

∑ cos

n=0

2

 2πn ———- + φ  N 

The energy of the raised cosine pulse is E =

π⁄ω

1

∫–π ⁄ ω –4- ( cos ( ωt ) + 1 )

2

dt

1 π⁄ω 2 = — ∫ ( cos ( ωt ) + 2 cos ( ωt ) + 1 ) dt 2 0 1 π ⁄ ω 1 1 — cos ( 2ωt ) + — + 2 cos ( ωt ) + 1 dt = — ∫  2 0 2 2 1 3 π = —  —  —- = 3π ⁄ 4ω 2  2  ω

1.47

The signal x(t) is even; its total energy is therefore 5 2

E = 2 ∫ x ( t ) dt 0

2

4

5

= 2 ∫ ( 1 ) dt + 2 ∫ ( 5 – t ) dt 2

0

2

4

5

1 4 3 = 2 [ t ] t=0 + 2 – — ( 5 – t ) 3

t=4

2 26 = 8 + — = —–3 3

1.48

(a) The differentiator output is  1  y ( t ) =  –1   0

for – 5 < t < – 4 for 4 < t < 5 otherwise (b) The energy of y(t) is E = –4 ∫–5 5 ( 1 ) dt + ∫ ( – 1 ) dt 2 2 4 = 1+1 = 2 1.49 The output of the integrator is t y ( t ) = A ∫ τ dτ = At for 0 ≤ t ≤ T 0 Hence the energy of y(t) is E = 1.50 T ∫0 2 3 A T A t dt = ------------3 2 2 (a) x(5t) 1.0 -1 -0.8 (b) 0 0.8 1 t 25 t x(0.2t) 1.0 -25 -20 0 20 3 1.51 x(10t - 5) 1.0 0 1.52 0.1 0.5 0.9 1.0 t (a) x(t) 1 -1 1 2 t 3 -1 y(t - 1) -1 1 2 t 3 -1 x(t)y(t - 1) 1 1 -1 2 t 3 -1 4 1.52 (b) x(t + 1) x(t - 1) 1 1 -1 1 2 3 4 t -1 y(-t)y(-t) 1 -2 -1 1 2 3 4 3 4 t -1 x(t - 1)y(-t) 1 t -2 -1 1 2 -1 1.52 (c) -2 -1 1 2 3 3 4 3 4 t -1 -2 -1 1 2 t x(t + 1)y(t - 2) -2 -1 1 2 5 t 1.52 (d) x(t) 1 -3 -2 -1 1 2 t 3 -1 y(1/2t + 1) 6 -5 -4 -3 -2 -1 1 2 4 6 t -1.0 x(t - 1)y(-t) 1 t -3 1.52 -2 -1 -1 1 2 3 (e) x(t) 1 -4 -3 -2 -1 1 2 3 t -1 y(2 - t) 1 -4 -3 -2 2 3 t -1 x(t)y(2 - t) -1 1 2 3 -1 6 t 1.52 (f) x(t) 1 -2 -1 1 t 2 -1 y(t/2 + 1) 1.0 -5 -3 -2 -1 -6 1 1 2 t 3 -1.0 x(2t)y(1/2t + 1) +1 -0.5 1 -1 2 t -1 1.52 (g) x(4 - t) 1 -7 -6 -5 -4 -3 t -2 -1 y(t) -2 -1 1 2 4 t x(4 - t)y(t) = 0 -3 -2 -1 1 2 3 7 t 1.53 We may represent x(t) as the superposition of 4 rectangular pulses as follows: g1(t) 1 1 2 11 2 3 4 t g2(t) 1 3 4 t g3(t) 1 1 2 3 4 t g4(t) 1 1 2 3 4 t 0 To generate g1(t) from the prescribed g(t), we let g 1 ( t ) = g ( at – b ) where a and b are to be determined. The width of pulse g(t) is 2, whereas the width of pulse g1(t) is 4. We therefore need to expand g(t) by a factor of 2, which, in turn, requires that we choose 1 a = --2 The mid-point of g(t) is at t = 0, whereas the mid-point of g1(t) is at t = 2. Hence, we must choose b to satisfy the condition a(2) – b = 0 or 1 b = 2a = 2  --- = 1  2 1 Hence, g 1 ( t ) = g  --- t – 1 2  Proceeding in a similar manner, we find that 2 5 g 2 ( t ) = g  --- t – ---  3 3 g3 ( t ) = g ( t – 3 ) g 4 ( t ) = g ( 2t – 7 ) Accordingly, we may express the staircase signal x(t) in terms of the rectangular pulse g(t) as follows: 8 2 5 1 x ( t ) = g  --- t – 1 + g  --- t – --- + g ( t – 3 ) + g ( 2t – 7 )  3 3 2  1.54 (a) x(t) = u(t) - u(t - 2) 0 1 t 2 (b) x(t) = u(t + 1) - 2u(t) + u(t - 1) -2 0 1 2 -1 t 3 -1 (c) x(t) = -u(t + 3) + 2u(t +1) -2u(t - 1) + u(t - 3) 1 -3 2 3 t 0 -1 (d) x(t) = r(t + 1) - r(t) + r(t - 2) 1 -2 -1 0 1 2 t 3 (e) x(t) = r(t + 2) - r(t + 1) - r(t - 1)+ r(t - 2) 1 -3 -2 -1 0 1 t 2 9 1.55 We may generate x(t) as the superposition of 3 rectangular pulses as follows: g1(t) 1 -4 -2 0 2 4 2 4 t g2(t) 1 -4 -2 0 t g3(t) 1 -4 -2 0 2 4 t All three pulses, g1(t), g2(t), and g3(t), are symmetrically positioned around the origin: 1. g1(t) is exactly the same as g(t). 2. g2(t) is an expanded version of g(t) by a factor of 3. 3. g3(t) is an expanded version of g(t) by a factor of 4. Hence, it follows that g1 ( t ) = g ( t ) 1 g 2 ( t ) = g  --- t 3  1 g 3 ( t ) = g  --- t 4  That is, 1 1 x ( t ) = g ( t ) + g  --- t + g  --- t 3  4  1.56 (a) x[2n] o 2 o o 0 -1 1 n (b) x[3n - 1] o 2 o1 o -1 0 1 10 n 1.56 (c) y[1 - n] o1 o o o o -4 -3 -2 -1 0 -1 o 1 2 3 4 5 o o o o 2 3 4 5 o o o o n (d) y[2 - 2n] o o -3 -2 o o1 o 1 -1 -1 n (e) x[n - 2] + y[n + 2] o o 4 o o3 o 2 o o 1 -7 o -6 o -5 o -4 o -3 o -2 -1 0 1 2 3 4 o o o 5 6 7 8 o o o 5 6 7 n o (f) x[2n] + y[n - 4] o -5 -4 -3 1 o -2 2 3 -1 o o o o o -1 11 o o o 4 n 1.56 (g) x[n + 2]y[n - 2] o -5 -4 -3 -2 o -1 1 o n 1 o o o o o2 3 o o (h) x[3 - n]y[-n] 3 o o 2 o -2 o o 1 o -3 o o -1 1 2 o 7 o 8 n 3 4 5 6 3 4 o 5 o 6 o n 4 o 5 o 6 o n (i) x[-n] y[-n] o 3 o 2 o o -5 o -4 -3 -2 -1 1 1 o 2 -1 o -2 o -3 o (j) x[n]y[-2-n] o 3 2 1 o -6 o -5 o -4 -2 o -1 1 o 2 3 -3 o -1 o -2 -3 12 o o 1.56 (k) x[n + 2]y[6-n] 3 o 2 o 1 -8 o -7 o -6 o -5 -4 -3 o o o 1.57 o -2 -1 1 o 2 o 3 o 4 o 5 o 6 n -1 -2 -3 (a) Periodic Fundamental period = 15 samples (b) Periodic Fundamental period = 30 samples (c) Nonperiodic (d) Periodic Fundamental period = 2 samples (e) Nonperiodic (f) Nonperiodic (g) Periodic Fundamental period = 2π seconds (h) Nonperiodic (i) Periodic Fundamental period = 15 samples 1.58 The fundamental period of the sinusoidal signal x[n] is N = 10. Hence the angular frequency of x[n] is 2πm m: integer Ω = ----------N The smallest value of Ω is attained with m = 1. Hence, 2π π Ω = ------ = --- radians/cycle 10 5 13 1.59 The amplitude of complex signal x(t) is defined by 2 2 xR( t ) + xI ( t ) = 2 2 2 2 A cos ( ωt + φ ) + A sin ( ωt + φ ) 2 2 = A cos ( ωt + φ ) + sin ( ωt + φ ) = A 1.60 Real part of x(t) is αt Re { x ( t ) } = Ae cos ( ωt ) Imaginary part of x(t) is αt Im { x ( t ) } = Ae sin ( ωt ) 1.61 We are given     x(t ) =      t ∆ ∆ --- for – --- ≤ t ≤ --2 2 ∆ ∆ 1 for t ≥ --2 ∆ 2 for t < – --2 The waveform of x(t) is as follows x(t) 1 1 2 -∆/2 ∆/2 0 - 12 14 t The output of a differentiator in response to x(t) has the corresponding waveform: y(t) 1 δ(t - 1 ) 2 2 1/∆ t ∆/2 0 -∆/2 1 δ(t + ∆ ) 2 2 y(t) consists of the following components: 1. Rectangular pulse of duration ∆ and amplitude 1/∆ centred on the origin; the area under this pulse is unity. 2. An impulse of strength 1/2 at t = ∆/2. 3. An impulse of strength -1/2 at t = -∆/2. As the duration ∆ is permitted to approach zero, the impulses (1/2)δ(t-∆/2) and -(1/2)δ(t+∆/2) coincide and therefore cancel each other. At the same time, the rectangular pulse of unit area (i.e., component 1) approaches a unit impulse at t = 0. We may thus state that in the limit: d lim y ( t ) = lim ----- x ( t ) ∆→0 ∆ → 0 dt = δ(t ) 1.62 We are given a triangular pulse of total duration ∆ and unit area, which is symmetrical about the origin: x(t) 2/∆ slope = -4/∆2 slope = 4/∆2 area = 1 -∆/2 ∆/2 0 15 t (a) Applying x(t) to a differentiator, we get an output y(t) depicted as follows: y(t) 4/∆2 area = 2/∆ ∆/2 t -∆/2 area = 2/∆ -4/∆2 (b) As the triangular pulse duration ∆ approaches zero, the differentiator output approaches the combination of two impulse functions described as follows: • An impulse of positive infinite strength at t = 0-. • An impulse of negative infinite strength at t = 0+. (c) The total area under the differentiator output y(t) is equal to (2/∆) + (-2/∆) = 0. In light of the results presented in parts (a), (b), and (c) of this problem, we may now make the following statement: When the unit impulse δ(t) is differentiated with respect to time t, the resulting output consists of a pair of impulses located at t = 0- and t = 0+, whose respective strengths are +∞ and -∞. 1.63 From Fig. P.1.63 we observe the following: x1 ( t ) = x2 ( t ) = x3 ( t ) = x ( t ) x4 ( t ) = y3 ( t ) Hence, we may write y 1 ( t ) = x ( t )x ( t – 1 ) (1) y2 ( t ) = x ( t ) (2) y 4 ( t ) = cos ( y 3 ( t ) ) = cos ( 1 + 2x ( t ) ) (3) The overall system output is y ( t ) = y1 ( t ) + y2 ( t ) – y4 ( t ) (4) Substituting Eqs. (1) to (3) into (4): y ( t ) = x ( t )x ( t – 1 ) + x ( t ) – cos ( 1 + 2x ( t ) ) (5) Equation (5) describes the operator H that defines the output y(t) in terms of the input x(t). 16 1.64 (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) 1.65 Memoryless ✓ ✓ ✓ x x x ✓ x x ✓ ✓ ✓ Stable ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ Causal ✓ ✓ ✓ ✓ x ✓ x ✓ x ✓ ✓ ✓ Linear x ✓ x ✓ ✓ ✓ x ✓ ✓ ✓ ✓ x Time-invariant ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ We are given y [ n ] = a0 x [ n ] + a1 x [ n – 1 ] + a2 x [ n – 2 ] + a3 x [ n – 3 ] Let k S { x(n)} = x(n – k ) We may then rewrite Eq. (1) in the equivalent form 1 2 3 y [ n ] = a0 x [ n ] + a1 S { x [ n ] } + a2 S { x [ n ] } + a3 S { x [ n ] } 1 2 3 = ( a0 + a1 S + a2 S + a3 S ) { x [ n ] } = H { x[n]} where 1 2 H = a0 + a1 S + a2 S + a3 S . 3 . (a) Cascade implementation of operator H: x[n] a0 S S . S a2 a1 a3 Σ y[n] 17 (1) (b) Parallel implementation of operator H: x[n] 1.66 a0 .. . S1 a1 Σ S2 a2 S3 a3 y[n] Using the given input-output relation: y [ n ] = a0 x [ n ] + a1 x [ n – 1 ] + a2 x [ n – 2 ] + a3 x [ n – 3 ] we may write y [ n ] = a0 x [ n ] + a1 x [ n – 1 ] + a2 x [ n – 2 ] + a3 x [ n – 3 ] ≤ a0 x [ n ] + a1 x [ n – 1 ] + a2 x [ n – 2 ] + a3 x [ n – 3 ] ≤ a0 M x + a1 M x + a2 M x + a3 M x = ( a 0 + a 1 + a 2 + a 3 )M x where M x = x ( n ) . Hence, provided that Mx is finite, the absolute value of the output will always be finite. This assumes that the coefficients a0, a1, a2, a3 have finite values of their own. It follows therefore that the system described by the operator H of Problem 1.65 is stable. 1.67 The memory of the discrete-time described in Problem 1.65 extends 3 time units into the past. 1.68 It is indeed possible for a noncausal system to possess memory. Consider, for example, the system illustrated below: . x(n + k) ak . x[n] Sk x(n - l) Sl a0 al Σ y[n] l{x[n]} = x[n - l], we have the input-output relation That is, with S y [ n ] = a0 x [ n ] + ak x [ n + k ] + al x [ n – l ] This system is noncausal by virtue of the term akx[n + k]. The system has memory by virtue of the term alx[n - l]. 18 1.69 (a) The operator H relating the output y[n] to the input x[n] is 1 H = 1+S +S where 2 k S { x[n]} = x[n – k ] for integer k (b) The inverse operator Hinvis correspondingly defined by inv 1 H = -------------------------1 2 1+S +S Cascade implementation of the operator H is described in Fig. 1. Correspondingly, feedback implementation of the inverse operator Hinvis described in Fig. 2 x[n] . . S S Σ Fig. 1 Operator H y[n] y[n] + Σ . S . x[n] S Fig. 2 Inverse Operator Hinv Figure 2 follows directly from the relation: x[n] = y[n] – x[n – 1] – x[n – 2] 1.70 For the discrete-time system (i.e., the operator H) described in Problem 1.65 to be timeinvariant, the following relation must hold n 0 S H = HS where n0 for integer n0 (1) n S 0 { x [ n ] } = x [ n – n0 ] and 1 2 H = 1+S +S We first note that n n 1 n n +1 2 S 0H = S 0(1 + S + S ) = S 0+S 0 Next we note that HS n0 1 +S 2 = (1 + S + S )S n0 + 2 (2) n0 19 n 1+n 2+n 0 0 = S 0+S +S (3) From Eqs. (2) and (3) we immediately see that Eq. (1) is indeed satisfied. Hence, the system described in Problem 1.65 is time-invariant. 1.71 (a) It is indeed possible for a time-variant system to be linear. (b) Consider, for example, the resistance-capacitance circuit where the resistive component is time variant, as described here: R(t) i(t) v1(t) + C . . o+ v2(t) o- This circuit, consisting of the series combination of the resistor R(t) and capacitor C, is time variant because of R(t). The input of the circuit, v1(t), is defined in terms of the output v2(t) by dv 2 ( t ) v 1 ( t ) = R ( t )C --------------- + v 2 ( t ) dt Doubling the input v1(t) results in doubling the output v2(t). Hence, the property of homogeneity is satisfied. Moreover, if N v1 ( t ) = ∑ v1, k ( t ) k=1 then N v2 ( t ) = ∑ v2, k ( t ) k=1 where dv 2, k ( t ) v 1, k ( t ) = R ( t )C ------------------- + v 2, k ( t ) , dt k = 1,2,...,N Hence, the property of superposition is also satisfied. We therefore conclude that the time-varying circuit of Fig. P1.71 is indeed linear. 1.72 We are given the pth power law device: p y(t ) = x (t ) (1) 20 Let y1(t) and y2(t) be the outputs of this system produced by the inputs x1(t) and x2(t), respectively. Let x(t) = x1(t) + x2(t), and let y(t) be the corresponding output. We then note that p for p ≠ 0, 1 y ( t ) = ( x1 ( t ) + x2 ( t ) ) ≠ y1 ( t ) + y2 ( t ) Hence the system described by Eq. (1) is nonlinear. 1.73 Consider a discrete-time system described by the operator H1: H 1 : y [ n ] = a0 x [ n ] + ak x [ n – k ] This system is both linear and time invariant. Consider another discrete-time system described by the operator H2: H 2 : y [ n ] = b0 x [ n ] + bk x [ n + k ] which is also both linear and time invariant. The system H1 is causal, but the second system H2 is noncausal. 1.74 The system configuration shown in Fig. 1.56(a) is simpler than the system configuration shown in Fig. 1.56(b). They both involve the same number of multipliers and summer. however, Fig. 1.56(b) requires N replicas of the operator H, whereas Fig. 1.56(a) requires a single operator H for its implementation. 1.75 (a) All three systems • have memory because of an integrating action performed on the input, • are causal because (in each case) the output does not appear before the input, and • are time-invariant. (b) H1 is noncausal because the output appears before the input. The input-output relation of H1 is representative of a differentiating action, which by itself is memoryless. However, the duration of the output is twice as long as that of the input. This suggests that H1 may consist of a differentiator in parallel with a storage device, followed by a combiner. On this basis, H1 may be viewed as a time-invariant system with memory. System H2 is causal because the output does not appear before the input. The duration of the output is longer than that of the input. This suggests that H2 must have memory. It is time-invariant. System H3 is noncausal because the output appears before the input. Part of the output, extending from t = -1 to t = +1, is due to a differentiating action performed on the input; this action is memoryless. The rectangular pulse, appearing in the output from t = +1 to t = +3, may be due to a pulse generator that is triggered by the termination of the input. On this basis, H3 would have to be viewed as time-varying. 21 Finally, the output of H4 is exactly the same as the input, except for an attenuation by a factor of 1/2. Hence, H4 is a causal, memoryless, and time-invariant system. 1.76 H1 is representative of an integrator, and therefore has memory. It is causal because the output does not appear before the input. It is time-invariant. H2 is noncausal because the output appears at t = 0, one time unit before the delayed input at t = +1. It has memory because of the integrating action performed on the input. But, how do we explain the constant level of +1 at the front end of the output, extending from t = 0 to t = +1? Since the system is noncausal, and therefore operating in a non real-time fashion, this constant level of duration 1 time unit may be inserted into the output by artificial means. On this basis, H2 may be viewed as time-varying. H3 is causal because the output does not appear before the input. It has memory because of the integrating action performed on the input from t = 1 to t = 2. The constant level appearing at the back end of the output, from t = 2 to t = 3, may be explained by the presence of a strong device connected in parallel with the integrator. On this basis, H3 is time-invariant. Consider next the input x(t) depicted in Fig. P1.76b. This input may be decomposed into the sum of two rectangular pulses, as shown here: xA(t) x(t) 2 2 1 1 0 1 2 t xB(t) + 0 1 2 2 1 t 0 1 2 t 2 t Response of H1 to x(t): y1,A(t) y1,B(t) 2 + 1 0 1 2 t y1(t) 2 2 1 1 0 1 2 22 t 0 1 Response of H2 to x(t): y2,A(t) 2 2 + 1 1 -1 y2(t) y2,B(t) 1 2 t 0 -1 2 0 -1 2 t 1 -1 0 1 t -1 -2 -2 The rectangular pulse of unit amplitude and unit duration at the front end of y2(t) is inserted in an off-line manner by artificial means Response of H3 to x(t): y3,A(t) 2 + 1 0 1.77 1 y3(t) y3,B(t) 2 3 2 2 1 1 t 0 1 2 3 t 0 1 2 3 t (a) The response of the LTI discrete-time system to the input δ[n-1] is as follows: y[n] o 2 o 1 o 4 2 o -1 1 -1 5 o o n 3 o (b) The response of the system to the input 2δ[n] - δ[n - 2] is as follows y[n] 4o 3 2 o 1 o -2 1 o -1 0 -1 -2 4 o 2 3 o o 5 6 o o 23 n (c) The input given in Fig. P1.77b may be decomposed into the sum of 3 impulse functions: δ[n + 1], -δ[n], and 2δ[n - 1]. The response of the system to these three components is given in the following table: Time n δ[n + 1] -1 0 1 2 3 +2 -1 +1 -δ[n] 2δ[n - 1] -2 +1 -1 +4 -2 +2 Total response +1 -3 +6 -3 2 Thus, the total response y[n] of the system is as shown here: y[n] o 6 5 4 3 o 2 o 1 o -3 2 o -2 0 -1 1 4 3 5 6 o o o -1 -2 -3 o o Advanced Problems 1.78 (a) The energy of the signal x(t) is defined by E = ∞ ∫–∞ x 2 ( t ) dt Substituting x ( t ) = xe ( t ) + xo ( t ) into this formula yields E = ∞ ∫–∞ [ xe ( t ) + xo ( t ) ] ∞ 2 dt 2 = ∫–∞ [ xe ( t ) + xo ( t ) + 2xe ( t )xo ( t ) ] = ∫–∞ xe ( t ) dt + ∫–∞ xo ( t ) dt + 2 ∫–∞ xe ( t )xo ( t ) dt ∞ 2 2 2 ∞ 2 ∞ 24 dt (1) With xe(t) even and xo(t) odd, it follows that the product xe(t)xo(t) is odd, as shown by x e ( – t )x o ( – t ) = x e ( t ) [ – x o ( t ) ] = – x e ( t )x o ( t ) Hence, ∞ ∫–∞ ( x e ( t )x o ( t ) ) dt = 0 ∫–∞ ∞ x e ( t )x o ( t ) dt + ∫ x e ( t )x o ( t ) dt 0 ∞ 0 ∫–∞ ( – xe ( t )xo ( t ) ) ( –dt ) + ∫0 ( xe ( t )xo ( t ) ) dt = ∞ ∞ 0 0 = – ∫ x e ( t )x o ( t ) dt + ∫ x e ( t )x o ( t ) dt = 0 Accordingly, Eq. (1) reduces to E = ∞ ∫–∞ ∞ x e ( t ) dt + ∫ 2 2 –∞ x o ( t ) dt (b) For a discrete-time signal x[n], – ∞ ≤ n ≤ ∞ , we may similarly write ∞ E = ∑x 2 [n] n=-∞ ∞ = ∑ [ xe [ n ] + xo [ n ] ] n=-∞ ∞ = ∑ ∞ 2 xe [ n ] ∑ + n=-∞ 2 2 2 xo [ n ] ∞ +2 n=-∞ ∑ xe [ n ]xo [ n ] n=-∞ With x e [ – n ]x o [ – n ] = – x e [ n ]x o [ n ] it follows that ∞ ∑ xe [ n ]xo [ n ] = n=-∞ = 0 ∞ n=-∞ 0 n=-0 ∑ xe [ n ]xo [ n ] + ∑ xe [ n ]xo [ n ] ∞ ∑ xe [ –n ]xo [ –n ] + ∑ xe [ n ]xo [ n ] n=-∞ 0 = – ∑ x e [ n ]x o [ n ] + n=∞ n=-0 ∞ ∑ xe [ n ]xo [ n ] n=0 = 0 25 (2) Accordingly, Eq. (2) reduces to ∞ E = ∑ ∞ 2 xe [ n ] + n=-∞ 1.79 ∑ xo [ n ] 2 n=-∞ (a) From Fig. P1.79, i ( t ) = i1 ( t ) + i2 ( t ) (1) di 1 ( t ) 1 t L -------------- + Ri 1 ( t ) = ---- ∫ i 2 ( τ ) dτ dt C –∞ (2) Differentiating Eq. (2) with respect to time t: 2 di 1 ( t ) d i1 ( t ) 1 ------------- = ---- i 2 ( t ) + R L ---------------2 dt C dt (3) Eliminating i2(t) between Eqs. (1) and (2): 2 d i1 ( t ) di 1 ( t ) 1 ------------- = ---- [ i ( t ) – i 1 ( t ) ] L ---------------+ R 2 dt C dt Rearranging terms: 2 d i 1 ( t ) R di 1 ( t ) 1 1 ---------------- + --- -------------- + -------i 1 ( t ) = -------i ( t ) 2 dt LC L LC dt (4) (b) Comparing Eqs. (4) with Eq. (1.108) for the MEMS as presented in the text, we may derive the following analogy: MEMS of Fig. 1.64 LRC circuit of Fig. P1.79 y(t) i1(t) ωn 1 ⁄ LC Q ωn L 1 L ---------- = --- ---R C R x(t) 1 -------i ( t ) LC 26 1.80 (a) As the pulse duration ∆ approaches zero, the area under the pulse x∆(t) remains equal to unity, and the amplitude of the pulse approaches infinity. (b) The limiting form of the pulse x∆(t) violates the even-function property of the unit impulse: δ ( –t ) = δ ( t ) 1.81 The output y(t) is related to the input x(t) as y(t ) = H { x(t )} (1) Let T0 denote the fundamental period of x(t), assumed to be periodic. Then, by definition, x(t ) = x(t + T 0) (2) Substituting t +T0 for t into Eq. (1) and then using Eq. (2), we may write y(t + T 0) = H { x(t + T 0)} = H { x(t )} = y(t ) (3) Hence, the output y(t) is also periodic with the same period T0. 1.82 (a) For 0 ≤ t < ∞ , we have 1 –t ⁄ τ x ∆ ( t ) = ---e ∆ At t = ∆/2, we have A = x∆ ( ∆ ⁄ 2 ) 1 –∆ ⁄ ( 2τ ) = ---e ∆ Since x∆(t) is even, then 1 –∆ ⁄ ( 2τ ) A = x ∆ ( ∆ ⁄ 2 ) = x ∆ ( – ∆ ⁄ 2 ) = ---e ∆ (b) The area under the pulse x∆(t) must equal unity for δ ( t ) = lim x ∆ ( t ) ∆→0 The area under x∆(t) is 27 ∞ ∫–∞ ∞ x ∆ ( t ) dt = 2 ∫ x ∆ ( t ) dt 0 ∞ 1 –t ⁄ τ = 2 ∫ ---e dt 0 ∆ 2 –t ⁄ τ ∞ = --- ( – τ )e 0 ∆ 2τ = ----∆ For this area to equal unity, we require ∆ τ = --2 (c) 8 ∆=1 ∆ = 0.5 ∆ = 0.25 ∆ = 0.125 7 6 Amplitude 5 4 3 2 1 0 −2 1.83 −1.5 −1 −0.5 0 Time 0.5 1 1.5 2 (a) Let the integral of a continuous-time signal x(t), – ∞ < t < ∞ , be defined by y(t ) = = t ∫–∞ x ( τ ) dτ 0 t ∫–∞ x ( t ) dt + ∫0 x ( τ ) dτ 28 The definite integral 0 ∫–∞ x ( t ) dt , representing the initial condition, is a constant. With differentiation as the operation of interest, we may also write dy ( t ) x ( t ) = ------------dt Clearly, the value of x(t) is unaffected by the value assumed by the initial condition 0 ∫–∞ x ( t ) dt It would therefore be wrong to say that differentiation and integration are the inverse of each other. To illustrate the meaning of this statement, consider the two following two waveforms that differ from each other by a constant value for – ∞ < t < ∞ : x2(t) x1(t) slope = a 0 slope = a t 0 d x1 ( t ) d x2 ( t ) Yet, y ( t ) = --------------, as illustrated below: = --------------dt dt y(t) a 0 t (b) For Fig. P1.83(a): R t y ( t ) + --- ∫ y ( τ ) dτ = x ( t ) L –∞ For R/L large, we approximately have R t --- ∫ y ( τ ) dτ ≈ x ( t ) L –∞ Equivalently, we have a differentiator described by L dx ( t ) R --- large y ( t ) ≈ --- ------------- , R dt L 29 t For Fig. P1.83(b): L dy ( t ) y ( t ) + --- ------------- = x ( t ) R dt For R/L small, we approximately have L dy ( t ) --- ------------- ≈ x ( t ) R dt Equivalently, we have an integrator described by R t R --- small y ( t ) ≈ --- ∫ x ( τ ) dτ L –∞ L (c) Consider the following two scenarios describing the LR circuits of Fig. P1.83 • The input x(t) consists of a voltage source with an average value equal to zero. • The input x(t) includes a dc component E (exemplified by a battery). These are two different input conditions. Yet for large R/L, the differentiator of Fig. P1.83(a) produces the same output. On the other hand, for small R/L the integrator of Fig. P1.83(b) produces different outputs. Clearly, on this basis it would be wrong to say that these two LR circuits are the inverse of each other. 1.84 (a) The output y(t) is defined by y ( t ) = A 0 cos ( ω 0 t + φ )x ( t ) (1) This input-output relation satisfies the following two conditions: • • Homogeneity: If the input x(t) is scaled by an arbitrary factor a, the output y(t) will be scaled by the same factor. Superposition: If the input x(t) consists of two additive components x1(t) and x2(t), then y ( t ) = y1 ( t ) + y2 ( t ) where k = 1,2 y ( t ) = A 0 cos ( ω 0 t + φ )x k ( t ) , Hence, the system of Fig. P1.84 is linear. (b) For the impulse input x(t ) = δ(t ) , Eq. (1) yields the corresponding output y′ ( t ) = A 0 cos ( ω 0 t + φ )δ ( t ) 30  A cos φδ ( 0 ), =  0  0, t=0 otherwise For x ( t ) = δ ( t – t 0 ) , Eq. (1) yields y″ ( t ) = A 0 cos ( ω 0 t + φ )δ ( t – t 0 )  A cos ( ω 0 t 0 + φ )δ ( 0 ), =  0  0, t = t0 otherwise Recognizing that y′ ( t ) ≠ y″ ( t ) , the system of Fig. P1.84 is time-variant. 1.85 (a) The output y(t) is related to the input x(t) as t y ( t ) = cos  2π f c t + k ∫ x ( τ ) dτ   –∞ (1) The output is nonlinear as the system violates both the homogeneity and superposition properties: • Let x(t) be scaled by the factor a. The corresponding value of the output is t y a ( t ) = cos  2π f c t + ka ∫ x ( τ ) dτ   –∞ For a ≠ 1 , we clearly see that y a ( t ) ≠ y ( t ) . • Let x ( t ) = x1 ( t ) + x2 ( t ) Then t t y ( t ) = cos  2π f c t + k ∫ x 1 ( τ ) dτ + k ∫ x 2 ( τ ) dτ   –∞ –∞ ≠ y1 ( t ) + y2 ( t ) where y1(t) and y2(t) are the values of y(t) corresponding to x1(t) and x2(t), respectively. (b) For the impulse input x(t ) = δ(t ) , Eq. (1) yields t y′ ( t ) = cos  2π f c t + k ∫ δ ( τ ) dτ   –∞ 31 = cos k, t = 0 + For the delayed impulse input x ( t ) = δ ( t – t 0 ) , Eq. (1) yields t y″ ( t ) = cos  2π f c t + k ∫ δ ( τ – t 0 ) dτ   –∞ + = cos ( 2π f c t 0 + k ) , t = t0 Recognizing that y′ ( t ) ≠ y″ ( t ) , it follows that the system is time-variant. 1.86 For the square-law device 2 y(t ) = x (t ) , the input x ( t ) = A 1 cos ( ω 1 t + φ 1 ) + A 2 cos ( ω 2 t + φ 2 ) yields the output 2 y(t ) = x (t ) = [ A 1 cos ( ω 1 t + φ 1 ) + A 2 cos ( ω 2 t + φ 2 ) ] 2 2 2 = A 1 cos 2 ( ω 1 t + φ 1 ) + A 2 cos 2 ( ω 2 t + φ 2 ) + 2 A 1 A 2 cos ( ω 1 t + φ 1 ) cos ( ω 2 t + φ 2 ) 2 A1 = ------ [ 1 + cos ( 2ω 1 t + 2φ 1 ) ] 2 2 A2 + ------ [ 1 + cos ( 2ω 2 t + 2φ 2 ) ] 2 + A 1 A 2 [ cos ( ( ω 1 + ω 2 )t + ( φ 1 + φ 2 ) ) + cos ( ( ω 1 – ω 2 )t + ( φ 1 – φ 2 ) ) ] The output y(t) contains the following components: 1.87 2 2 • DC component of amplitude ( A 1 + A 2 ) ⁄ 2 • Sinusoidal component of frequency 2ω1, amplitude A 1 ⁄ 2 , and phase 2φ1 • • • Sinusoidal component of frequency 2ω2, amplitude A 2 ⁄ 2 , and phase 2φ2 Sinusoidal component of frequency (ω1 - ω2), amplitude A1A2, and phase (φ1 - φ2) Sinusoidal component of frequency (ω1 + ω2), amplitude A1A2, and phase (φ1 + φ2) 2 2 The cubic-law device 3 y(t ) = x (t ) , in response to the input, 32 x ( t ) = A cos ( ωt + φ ) , produces the output 3 3 y ( t ) = A cos ( ωt + φ ) 1 3 3 = A cos ( ωt + φ ) ⋅ --- ( cos ( ( 2ωt + 2φ ) + 1 ) ) 2 3 A = ------ [ cos ( ( 2ωt + 2φ ) + ( ωt + φ ) ) 2 3 A + cos ( ( 2ωt + 2φ ) – ( ωt + φ ) ) ] + ------ cos ( ωt + φ ) 2 3 3 A A = ------ [ cos ( 3ωt + 3φ ) + cos ( ωt + φ ) ] + ------ cos ( ωt + φ ) 2 2 3 A 3 = ------ cos ( 3ωt + 3φ ) + A cos ( ωt + φ ) 2 The output y(t) consists of two components: • • Sinusoidal component of frequency ω, amplitude A3 and phase φ Sinusoidal component of frequency 3ω, amplitude A3/2, and phase 3φ To extract the component with frequency 3ω (i.e., the third harmonic), we need to use a band-pass filter centered on 3ω and a pass-band narrow enough to suppress the fundamental component of frequency ω. From the analysis presented here we infer that, in order to generate the pth harmonic in response to a sinusoidal component of frequency ω, we require the use of two subsystems: • Nonlinear device defined by • p = 2,3,4,... y(t ) = x (t ) , Narrowband filter centered on the frequency pω. p 1.88 (a) Following the solution to Example 1.21, we start with the pair of inputs: 1 ∆ x 1 ( t ) = ---u  t + --- ∆  2 1 ∆ x 2 ( t ) = ---u  t – --- ∆  2 The corresponding outputs are respectively given by 33 1 y 1 ( t ) = --- 1 – e ∆ 1 y 2 ( t ) = --- 1 – e ∆ ∆ – α  t + ---  2 ∆ – α  t – ---  2 ∆ ∆ cos ω n  t + --- u  t + ---    2 2 ∆ ∆ cos ω n  t – --- u  t – ---  2  2 The response to the input x∆ ( t ) = x1 ( t ) – x2 ( t ) is given by 1 ∆ ∆ y ∆ ( t ) = ---  u  t + --- – u  t – ---    ∆ 2 2  –α  t + ∆---  ω n ∆  ∆ 2 1 – ---  e cos  ω n t + --------- u t + -- ∆ 2 2    –e ∆ – α  t – ---  2  ω n ∆  ∆   cos ω n t – ---------- u t – ---   2   2   As ∆ → 0, x ∆ ( t ) → δ ( t ) . We also note that 1 d ∆ ∆  ----- z ( t ) = lim  --- z  t + --- – z  t – ---     dt 2 2  ∆ → 0 ∆ Hence, with z ( t ) = e system is y ( t ) = lim y ∆ ( t ) – αt cos ( ω n t )u ( t ) , we find that the impulse response of the ∆→0 d –αt = δ ( t ) – ----- [ e cos ( ω n t )u ( t ) ] dt d d –αt – αt = δ ( t ) – ----- [ e cos ( ω n t ) ] ⋅ u ( t ) – [ e cos ( ω n t ) ] ----- u ( t ) dt dt = δ ( t ) – [ – αe –e – αt – αt cos ( ω n t ) – ω n e cos ( ω n t )δ ( t ) – αt sin ( ω n t ) ]u ( t ) (1) 34 Since e – αt cos ( ω n t ) = 1 at t = 0, Eq. (1) reduces to y ( t ) = [ αe – αt cos ( ω n t ) + ω n e – αt sin ( ω n t ) ]u ( t ) (b) ω n = jα n where α n < α Using Euler’s formula, we can write – jω t jω t –α t α t n +e n e n +e n e cos ( ω n t ) = ------------------------------- = -------------------------2 2 The step response can therefore be rewritten as – ( α – α n )t 1 –( α + αn )t y ( t ) = 1 – --- ( e +e ) u(t ) 2 Again, the impulse response in this case can be obtained as dy ( t ) h ( t ) = ------------- = dt              – ( α – α n )t 1 –( α + αn )t 1 – --- ( e +e ) δ(t ) 2 =0 α + α n –( α + αn )t α – α n –( α – αn )t + ---------------- e + ---------------- e u(t ) 2 2 α 2 –α t α 1 –α t = ------ e 2 + ------ e 1 u ( t ) 2 2 where α1 = α - αn and α2 = α + αn. 1.89 Building on the solution described in Fig. 1.69, we may relabel Fig. P1.89 as follows x[n] + + Σ . y′[n] 0.5 S . + Σ y[n] + 0.5 where (see Eq. (1.117)) ∞ y′ [ n ] = x [ n ] + ∑ 0.5 k x[n – k ] k=1 and y [ n ] = y′ [ n ] + 0.5y′ [ n – 1 ] 35 (2) ∞ ∑ 0.5 = x[n] + ∞ k x [ n – k ] + 0.5x [ n – 1 ] + k=1 ∞ = ∑ 0.5 k x[n – 1 – k ] k=1 ∞ k x [ n – k ] + 0.5 ∑ 0.5 x [ n – 1 – k ] k=0 1.90 ∑ 0.5 k k=0 According to Eq. (1.108) the MEMS accelerometer is described by the second-order equation 2 2 d y ( t ) ω n dy ( t ) -------------- + ------ ------------- + ω n y ( t ) = x ( t ) (1) 2 Q dt dt Next, we use the approximation (assuming that Ts is sufficiently small) T T 1 d ----- y ( t ) ≈ ----- y  t + -----s – y  t – -----s  Ts  dt 2 2 (2) Applying this approximation a second time: 2 d y ( t ) 1 d  T s t – T -------------------y t ≈ + ---– y -----s 2    T s dt 2 2 dt T T 1 d 1 d = ----- ----- y  t + -----s – ----- ----- y  t – -----s T s dt  2  T s dt  2  11 ≈ -----  ----- [ y ( t + T s ) – y ( t ) ]  T sT s   11 – -----  ----- [ y ( t ) – y ( t – T s ) ]  T sT s  1 = -----2- [ y ( t + T s ) – 2y ( t ) + y ( t – T s ) ] Ts (3) Substituting Eqs. (2) and (3) into (1): 2 ω n  T s T 1 2 -----2- [ y ( t + T s ) – 2y ( t ) + y ( t – T s ) ] + --------- y t + ----- – y  t – -----s + ω n y ( t ) = x ( t )     QT 2 2 s Ts Define 2 ωn T s ----------- = a1 , Q 2 2 ωn T s – 2 = a2 , 36 (4) 1 -----2- = b 0 , Ts t = nT s ⁄ 2 , y [ n ] = y ( nT s ⁄ 2 ) , where, in effect, continuous time is normalized with respect to Ts/2 to get n. We may then rewrite Eq. (4) in the form of a noncausal difference equation: y [ n + 2 ] + a1 y [ n + 1 ] + a2 y [ n ] – a1 y [ n – 1 ] + y [ n – 2 ] = b0 x [ n ] (5) Note: The difference equation (5) is of order 4, providing an approximate description of a second-order continuous-time system. This doubling in order is traced to Eq. (2) as the approximation for a derivative of order 1. We may avoid the need for this order doubling by adopting the alternative approximation: d 1 ----- y ( t ) ≈ ----- [ y ( t + T s ) – y ( t ) ] dt Ts However, in general, for a given sampling period Ts, this approximation may not be as accurate as that defined in Eq. (2). 1.91 Integration is preferred over differentiation for two reasons: (i) Integration tends to attenuate high frequencies. Recognizing that noise contains a broad band of frequencies, integration has a smoothing effect on receiver noise. (ii) Differentiation tends to accentuate high frequencies. Correspondingly, differentiation has the opposite effect to integration on receiver noise. 1.92 From Fig. P1.92, we have i ( t ) = i1 ( t ) + i2 ( t ) 1 t v ( t ) = Ri 2 ( t ) = ---- ∫ i 1 ( τ ) dτ C –∞ This pair of equations may be rewritten in the equivalent form: 1 i 1 ( t ) = i ( t ) – ---v ( t ) R 1 t v ( t ) = ---- ∫ i 1 ( τ ) dτ C –∞ 37 Correspondingly, we may represent the parallel RC circuit of Fig. P1.92 by the block diagram i(t) + Σ i1(t) 1 t ---- ∫ dτ C –∞ - . v(t) 1 R The system described herein is a feedback system with the capacitance C providing the forward path and the conductance 1/R providing the feedback path. %Solution to Matlab Experiment 1.93 f = 20; k = 0:0.0001:5/20; amp = 5; duty = 60; %Make Square Wave y1 = amp * square(2*pi*f*k,duty); %Make Sawtooth Wave y2 = amp * sawtooth(2*pi*f*k); %Plot Results figure(1); clf; subplot(2,1,1) plot(k,y1) xlabel(’time (sec)’) ylabel(’Voltage’) title(’Square Wave’) axis([0 5/20 -6 6]) subplot(2,1,2) plot(k,y2) xlabel(’time (sec)’) ylabel(’Voltage’) title(’Sawtooth Wave’) 38 axis([0 5/20 -6 6]) Square Wave 6 4 Voltage 2 0 −2 −4 −6 0 0.05 0.1 0.15 0.2 0.25 0.2 0.25 time (sec) Sawtooth Wave 6 4 Voltage 2 0 −2 −4 −6 0 0.05 0.1 0.15 time (sec) % Solution to Matlab Experiment 1.94 t = 0:0.01:5; x1 = 10*exp(-t) - 5*exp(-0.5*t); x2 = 10*exp(-t) + 5*exp(-0.5*t); %Plot Figures figure(1); clf; subplot(2,1,1) plot(t,x1) xlabel(’time (sec)’) ylabel(’Amplitude’) title(’x(t) = e^{-t} - e^{-0.5t}’) subplot(2,1,2); plot(t,x2) xlabel(’time (sec)’) ylabel(’Amplitude’) title(’x(t) = e^{-t} + e^{-0.5t}’) 39 x(t) = e−t − e−0.5t 5 Amplitude 4 3 2 1 0 −1 0 0.5 1 1.5 2 2.5 time (sec) 3 3.5 4 4.5 5 3 3.5 4 4.5 5 x(t) = e−t + e−0.5t Amplitude 15 10 5 0 0 0.5 1 1.5 2 2.5 time (sec) % Solution to Matlab Experiment 1.95 t = (-2:0.01:2)/1000; a1 = 500; x1 = 20 * sin(2*pi*1000*t - pi/3) .* exp(-a1*t); a2 = 750; x2 = 20 * sin(2*pi*1000*t - pi/3) .* exp(-a2*t); a3 = 1000; x3 = 20 * sin(2*pi*1000*t - pi/3) .* exp(-a3*t); %Plot Resutls figure(1); clf; plot(t,x1,’b’); hold on plot(t,x2,’k:’); plot(t,x3,’r--’); hold off xlabel(’time (sec)’) 40 ylabel(’Amplitude’) title(’Exponentially Damped Sinusoid’) axis([-2/1000 2/1000 -120 120]) legend(’a = 500’, ’a = 750’, ’a = 1000’) Exponentially Damped Sinusoid a = 500 a = 750 a = 1000 100 Amplitude 50 0 −50 −100 −2 −1.5 −1 −0.5 0 time (sec) % Solution to Matlab Experiment 1.96 F = 0.1; n = -1/(2*F):0.001:1/(2*F); w = cos(2*pi*F*n); %Plot results figure(1); clf; plot(n,w) xlabel(’Time (sec)’) ylabel(’Amplitude’) title(’Raised Cosine Filter’) 41 0.5 1 1.5 2 −3 x 10 Raised Cosine Filter 1 0.8 0.6 0.4 Amplitude 0.2 0 −0.2 −0.4 −0.6 −0.8 −1 −5 −4 −3 −2 −1 0 Time (sec) % Solution to Matlab Experiment 1.97 t = -2:0.001:10; %Generate first step function x1 = zeros(size(t)); x1(t>0)=10; x1(t>5)=0; %Generate shifted function delay = 1.5; x2 = zeros(size(t)); x2(t>(0+delay))=10; x2(t>(5+delay))=0;

42

1

2

3

4

5

%Plot data figure(1); clf; plot(t,x1,’b’) hold on plot(t,x2,’r:’) xlabel(’Time (sec)’) ylabel(’Amplitude’) title(’Rectangular Pulse’) axis([-2 10 -1 11]) legend(’Zero Delay’, ’Delay = 1.5’); Rectangular Pulse Zero Delay Delay = 1.5

10

8

Amplitude

6

4

2

0

−2

0

2

4 Time (sec)

43

6

8

10

Solutions to Additional Problems 2.32. A discrete-time LTI system has the impulse response h[n] depicted in Fig. P2.32 (a). Use linearity and time invariance to determine the system output y[n] if the input x[n] is Use the fact that:

δ[n − k] ∗ h[n]

= h[n − k]

(ax1 [n] + bx2 [n]) ∗ h[n] = ax1 [n] ∗ h[n] + bx2 [n] ∗ h[n]

(a) x[n] = 3δ[n] − 2δ[n − 1]

y[n]

=

3h[n] − 2h[n − 1]

=

3δ[n + 1] + 7δ[n] − 7δ[n − 2] + 5δ[n − 3] − 2δ[n − 4]

(b) x[n] = u[n + 1] − u[n − 3]

x[n] = δ[n] + δ[n − 1] + δ[n − 2] y[n] = h[n] + h[n − 1] + h[n − 2] = δ[n + 1] + 4δ[n] + 6δ[n − 1] + 4δ[n − 2] + 2δ[n − 3] + δ[n − 5]

(c) x[n] as given in Fig. P2.32 (b) . x[n]

=

2δ[n − 3] + 2δ[n] − δ[n + 2]

y[n]

=

2h[n − 3] + 2h[n] − h[n + 2]

= −δ[n + 3] − 3δ[n + 2] + 7δ[n] + 3δ[n − 1] + 8δ[n − 3] + 4δ[n − 4] − 2δ[n − 5] + 2δ[n − 6]

2.33. Evaluate the discrete-time convolution sums given below. (a) y[n] = u[n + 3] ∗ u[n − 3] Let u[n + 3] = x[n] and u[n − 3] = h[n]

1

x[k]

h[n−k]

… k

−3 −2 −1

n−3

k

Figure P2.33. (a) Graph of x[k] and h[n − k]

for n − 3 < −3 n E-Book Information Year: 2,002 Edition: 2 Pages: 634 Pages In File: 634 Language: English Topic: 261 Identifier: 0471164747,9780471164746 Org File Size: 4,605,950 Extension: pdf Design and MATLAB concepts have been integrated in text. * Integrates applications as it relates signals to a remote sensing system, a controls system, radio astronomy, a biomedical system and seismology.

Signals and Systems (2nd Edition)

Salvar Salvar Signals and Systems (2nd Edition) para ler mais tarde

0% 0% acharam este documento útil, Marcar esse documento como útil

0% 0% acharam que esse documento não foi útil, Marcar esse documento como não foi útil

Incorporar

Compartilhar

Signals And Systems 2nd Edition PDF Free Download

Signals and Systems 2nd edition has been written by Simon haykin. Signals and Systems 2nd edition eBook is a horrific choice when you get fresh signals and systems. It sometimes takes several pages to describe a function or way of doing something, and sometimes it introduces an explanatory function. It not only explains in depth each definition but includes a range of examples on EE and ME majors.

Signals and Systems 2nd edition PDF IS a book for graduate students interested in completely learning about signals and systems. This book should truly be the finest choice for students whose research is related to a great deal of understanding of the topic of ‘signal and system.’

Related: Free Chemical Engineering Books

Summary:

Signals and Systems 2nd edition PDF free download Features new challenges, new thematic examples, new coverage, many practical possibilities, and All examples build on actual situations and highlight the correct theoretical mathematical processes

Signals and Systems 2nd edition free download is a requirement for electrical, and biomedical students of engineering for a second or third year. This book covers the subjects necessary to comprehend signals and system topics remarkably it is compiled as though a robot should read it

Signals and Systems 2nd edition eBook The Second Edition has already incorporated further examples and problems noted for its wide sets of problems and examples. To increase clarity and organization, all chapters were updated. Similarly, all signals and systems are covered. The tables in the annex are all right. The index, content table, and other characteristics of the book are also included.

Related: Electrical Engineering Books

Download:

If you want to get Signals and Systems 2nd edition PDF free download on your smartphone or tablets contact us we can make Signals and Systems 2nd edition gets online available directly on your site with download it now! Also, check out Free Engineering Books

키워드에 대한 정보 signals and systems 2nd edition pdf

다음은 Bing에서 signals and systems 2nd edition pdf 주제에 대한 검색 결과입니다. 필요한 경우 더 읽을 수 있습니다.

이 기사는 인터넷의 다양한 출처에서 편집되었습니다. 이 기사가 유용했기를 바랍니다. 이 기사가 유용하다고 생각되면 공유하십시오. 매우 감사합니다!

사람들이 주제에 대해 자주 검색하는 키워드 [PDF] Solution Manual | Signals and Systems 2nd Edition Oppenheim \u0026 Willsky

  • SolutionsManuals
  • TestBanks
  • EngineeringBooks
  • EngineerBooks
  • EngineeringStudentBooks
  • MechanicalBooks
  • ScienceBooks
[PDF] #Solution #Manual #| #Signals #and #Systems #2nd #Edition #Oppenheim #\u0026 #Willsky


YouTube에서 signals and systems 2nd edition pdf 주제의 다른 동영상 보기

주제에 대한 기사를 시청해 주셔서 감사합니다 [PDF] Solution Manual | Signals and Systems 2nd Edition Oppenheim \u0026 Willsky | signals and systems 2nd edition pdf, 이 기사가 유용하다고 생각되면 공유하십시오, 매우 감사합니다.

Leave a Comment