THE UNIVERSITY OF CHICAGO
MEASURING THE SMALL ANGULAR SCALE COSMIC MICROWAVE
BACKGROUND TEMPERATURE ANISOTROPY WITH THE QUAD TELESCOPE
A DISSERTATION SUBMITTED TO
THE FACULTY OF THE DIVISION OF THE PHYSICAL SCIENCES
IN CANDIDACY FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
DEPARTMENT OF ASTRONOMY AND ASTROPHYSICS
BY
ROBERT BRYAN FRIEDMAN
CHICAGO, ILLINOIS
AUGUST 2009
UMI Number: 3369328
UMI Microform 3369328
Copyright 2009 by ProQuest LLC
All rights reserved. This microform edition is protected against
unauthorized copying under Title 17, United States Code.
ProQuest LLC
789 East Eisenhower Parkway
P.O. Box 1346
Ann Arbor, MI 48106-1346
© 2009 by Robert Bryan Friedman. All rights reserved.
For my mother,
who despite causing a few stomach aches,
never actually got us lost;
and who taught me:
there’s nothing more important in life...
than living it.
TABLE OF CONTENTS

LIST OF FIGURES . . . . . vii
LIST OF TABLES . . . . . x
ACKNOWLEDGMENTS . . . . . xi
ABSTRACT . . . . . xv

CHAPTER
1 INTRODUCTION . . . . . 1
  1.1 Cosmology: Progress Driven by Data . . . . . 2
  1.2 The First 14 Billion Years in Reverse . . . . . 8
    1.2.1 A Bad Analogy . . . . . 10
    1.2.2 A Cosmic Birth Certificate . . . . . 13
    1.2.3 What’s the Matter . . . . . 15
    1.2.4 The Big Bang Reigns Supreme . . . . . 16
    1.2.5 The Observable Universe . . . . . 17
  1.3 The Metric: The Fabric of Spacetime . . . . . 20
  1.4 Characterizing the Cosmic Microwave Background . . . . . 25
    1.4.1 The Temperature Field . . . . . 26
    1.4.2 Anisotropies from Acoustic Oscillations . . . . . 30
    1.4.3 Decoding the Power Spectrum . . . . . 35
    1.4.4 Initial Conditions from Inflation . . . . . 37
    1.4.5 Polarization in the CMB . . . . . 38
  1.5 Large Scale Structure: Something in the Way of Things . . . . . 43
  1.6 Thesis Outline . . . . . 46

2 THE QUAD TELESCOPE . . . . . 49
  2.1 The Optical Assembly . . . . . 51
    2.1.1 The Observing Site and Mount . . . . . 52
    2.1.2 The Optical Path . . . . . 53
    2.1.3 The Focus Mechanism . . . . . 55
    2.1.4 The Pixel Array . . . . . 57
  2.2 The Foam Cone . . . . . 59
    2.2.1 Shaping the Foam . . . . . 60
    2.2.2 Assembling the Cone . . . . . 64
    2.2.3 Coupling the Cone to the Mirrors . . . . . 68
    2.2.4 Shipping the Foam Cone . . . . . 68
  2.3 The Receiver . . . . . 72
    2.3.1 The Bolometers . . . . . 72
    2.3.2 Cryogenics . . . . . 75

3 FROM SKY TO MAP . . . . . 78
  3.1 Observation Strategy . . . . . 80
  3.2 Low Level Data Processing . . . . . 83
    3.2.1 Deconvolution . . . . . 83
    3.2.2 Relative Gain Calibration . . . . . 85
    3.2.3 Long Term Gain Stability . . . . . 87
    3.2.4 Noise Properties . . . . . 87
  3.3 Mapmaking . . . . . 92
    3.3.1 Formalism . . . . . 92
    3.3.2 Telescope Pointing . . . . . 94
    3.3.3 Ground Template Removal . . . . . 95
    3.3.4 Putting It All Together . . . . . 97
    3.3.5 Absolute Calibration . . . . . 99
    3.3.6 Jackknife Maps . . . . . 101
  3.4 Point Source Identification . . . . . 103
    3.4.1 Two Dimensional Fourier Plane Optimal Filtering . . . . . 103
    3.4.2 Signal-to-Noise Cleaning . . . . . 108
  3.5 Map Simulation . . . . . 110
    3.5.1 Signal Simulations . . . . . 111
    3.5.2 Noise Simulations . . . . . 112
  3.6 Simulating Point Source Foregrounds . . . . . 113

4 THE BEAM MODEL . . . . . 118
  4.1 Beam Calibration Observations . . . . . 121
  4.2 Beam Stability . . . . . 124
  4.3 Main-Lobe Parameters . . . . . 128
    4.3.1 From Rowcals . . . . . 128
    4.3.2 From Beamruns . . . . . 132
  4.4 Near Sidelobes . . . . . 142
  4.5 Array Parameters . . . . . 146
  4.6 Beam Uncertainty . . . . . 153
  4.7 Far Sidelobes . . . . . 156

5 THE HIGH-ℓ TT POWER SPECTRUM . . . . . 160
  5.1 Processing the Spectra . . . . . 160
    5.1.1 Apodization and Source Masking . . . . . 161
    5.1.2 Fourier Plane Weighting/Masking . . . . . 162
    5.1.3 Bandpowers . . . . . 164
    5.1.4 Noise Subtraction . . . . . 169
    5.1.5 Bandpower Window Functions . . . . . 169
    5.1.6 Filter/Beam Correction . . . . . 171
    5.1.7 Final Bandpowers and the Covariance Matrix . . . . . 172
  5.2 The High-ℓ TT Spectra . . . . . 176
    5.2.1 High-ℓ TT Bandpower Results . . . . . 177
    5.2.2 Agreement with Simulation . . . . . 182

6 A COSMOLOGICAL CONTEXT . . . . . 184
  6.1 SZE Secondary Anisotropy Spectra . . . . . 184
    6.1.1 Analytic Estimates . . . . . 186
    6.1.2 Simulations . . . . . 188
    6.1.3 Spectra Properties . . . . . 189
  6.2 Determining a Best-Fit SZ Template Amplitude . . . . . 194
    6.2.1 Naive Results . . . . . 198
    6.2.2 Offset Log-Normal Transformation . . . . . 201
    6.2.3 Scaling Signal Variance . . . . . 203
    6.2.4 Final Results and Implications for σ8 . . . . . 204
  6.3 Consistency with Recent High-ℓ Experiments . . . . . 208

7 CONCLUSIONS . . . . . 211

REFERENCES . . . . . 213
LIST OF FIGURES

1.1 The Flammarion Woodcut . . . . . 1
1.2 The Milky Way . . . . . 3
1.3 The Great Andromeda Nebula . . . . . 4
1.4 Herschel’s Milky Way . . . . . 6
1.5 The Original Hubble Diagram . . . . . 8
1.6 An Illustration of the Evolution of the Universe . . . . . 11
1.7 The Millennium Simulation . . . . . 17
1.8 Simulation Compared to Observation . . . . . 18
1.9 A Full-Sky CMB Map from WMAP . . . . . 26
1.10 The WMAP 5 yr Power Spectrum . . . . . 28
1.11 The Annotated CMB Power Spectrum . . . . . 35
1.12 Schematic Representation of Quadrupole Thomson Scattering . . . . . 39
1.13 CMB Polarization Power Spectra . . . . . 42

2.1 Aerial Photograph of QUaD . . . . . 49
2.2 Schematic of the QUaD Telescope . . . . . 50
2.3 Panoramic View of the South Pole Site . . . . . 52
2.4 The QUaD Optical Path . . . . . 54
2.5 The Focus Mechanism . . . . . 56
2.6 The Feed Horn Assembly . . . . . 57
2.7 Diagram of the Focal Plane Array . . . . . 58
2.8 The Foam Sheet Oven . . . . . 61
2.9 The Foam Sheet Oven-form . . . . . 63
2.10 The Foam Sheet Cutting Process . . . . . 64
2.11 The Foam Cone Assembly Strategy . . . . . 65
2.12 The Foam Cone Lateral Trim . . . . . 66
2.13 The Foam Cone Coupling Pieces . . . . . 67
2.14 Shipping the Foam Cone . . . . . 70
2.15 The Foam Cone on the Telescope at Pole . . . . . 71
2.16 Scanning Electron Micrograph of a Single PSB . . . . . 73
2.17 The QUaD Cryostat . . . . . 75
2.18 The QUaD Receiver Core . . . . . 77

3.1 The CMB Observation Field . . . . . 79
3.2 The CMB Observation Scan Pattern . . . . . 81
3.3 Detector Transfer Functions . . . . . 84
3.4 Elevation Nod Relative Gain Calibration . . . . . 86
3.5 Internal Long Term Gain Calibration Source . . . . . 88
3.6 Gain Suppression Correction Factor . . . . . 89
3.7 Timestream Data . . . . . 90
3.8 Timestream Noise Spectra . . . . . 91
3.9 Temperature, deck-jackknife and variance maps . . . . . 100
3.10 Two-Dimensional Fourier Space Optimal Point-Source Filters . . . . . 105
3.11 Fourier Space Optimal Point-Source Filter Radial Profiles . . . . . 106
3.12 An Optimally Filtered Map . . . . . 107
3.13 Signal-To-Noise Map Pixel-Distribution . . . . . 109
3.14 Radio source number counts and point source simulation input model . . . . . 115

4.1 The WMAP W-band Extragalactic Point Source Catalog . . . . . 123
4.2 Main-Lobe Width Temperature Dependence . . . . . 126
4.3 Beam Stability Through All Epochs . . . . . 127
4.4 Rowcal Gaussian Blip-fit . . . . . 131
4.5 RCW38 Co-Added Beamrun Map . . . . . 132
4.6 Pre-Refocus RCW38 Individual Channel Beamrun Maps . . . . . 134
4.7 Post-Correction PKS0537-441 Individual Channel Beamrun Maps . . . . . 135
4.8 Polynomial Filtering in PKS0537-441 Beamrun Maps . . . . . 136
4.9 PKS0537-441 Beamrun TOD Noise Correlation Function . . . . . 140
4.10 PKS0537-441 Beamrun TOD Noise Power Spectrum . . . . . 141
4.11 PKS0537-441 Beamrun Map and Pixel Covariance . . . . . 142
4.12 PKS0537-441 Beamrun Derived Parameters . . . . . 143
4.13 Beam Physical Optics Model Prediction . . . . . 144
4.14 PKS0537-441 Beamrun TOD Radial Profiles . . . . . 145
4.15 The Empirical Beam Sidelobe Model . . . . . 146
4.16 Channel Array Offsets . . . . . 148
4.17 Array Parameter Stability . . . . . 149
4.18 Array Parameter Temperature Dependence . . . . . 150
4.19 PSB Pair Pointing Offsets . . . . . 151
4.20 PKS0537-441 beamrun derived beam parameters with errors . . . . . 155
4.21 Schematic of All Known QUaD Far-sidelobes . . . . . 157
4.22 Maps of the Inner Circular and Radial Sidelobes . . . . . 158
4.23 Maps of the Outer Circular Sidelobe . . . . . 159

5.1 Fourier Plane Weights . . . . . 163
5.2 Two Dimensional Auto Power Spectra . . . . . 165
5.3 Raw Spectra Without Point-Source Masking . . . . . 167
5.4 Raw Spectra With Point Source Masking . . . . . 168
5.5 The Bandpower Window Functions . . . . . 170
5.6 Filter/Beam Transfer Function . . . . . 172
5.7 Final Signal and Jackknife Bandpowers (Simulated) . . . . . 174
5.8 Distribution of Simulated Bandpower Deviations . . . . . 175
5.9 Bandpower Deviations . . . . . 176
5.10 High-ℓ TT Bandpower Results . . . . . 179
5.11 High-ℓ Bandpower Results vs. the Simulation Distribution . . . . . 180
5.12 High-ℓ TT Bandpower Deviations . . . . . 183

6.1 The CMB Spectral Distortion due to the SZE . . . . . 185
6.2 The Frequency Dependence of the SZE . . . . . 187
6.3 Simulated SZE Power Spectra vs. an Analytic Estimate . . . . . 192
6.4 SZE Power Spectra from SPH and MMH Simulations . . . . . 193
6.5 Expected SZ Power for Simulated SZA Fields . . . . . 195
6.6 SZE Template Amplitude Likelihood Distribution: Naive Results . . . . . 199
6.7 Bandpower Excess and Covariance for SZE Template Fitting . . . . . 202
6.8 SZE Template Amplitude Likelihood Distribution: Rescaled Covariance . . . . . 205
6.9 SZE Template Implied σ8 Likelihood Distribution . . . . . 207
6.10 QUaD Bandpowers vs. Other Recent Results . . . . . 209
LIST OF TABLES

3.1 Sources Detected in 2006/07 QUaD Maps (at > 5σ) . . . . . 111

5.1 100 GHz Bandpower Results . . . . . 181
5.2 150 GHz Bandpower Results . . . . . 181
5.3 χ² Tests for Bandpowers vs. the Simulation Distribution . . . . . 182

6.1 SZE Power Spectra Estimation References . . . . . 190
6.2 A_SZ Best-Fit Values: Naive Likelihood . . . . . 200
6.3 A_SZ Best-Fit Values for All Likelihood Methods . . . . . 206
ACKNOWLEDGMENTS
The work presented here is the culmination of a goal that first took shape at least ten years
ago while I was still an undergraduate student. Ten years is a long time — just under a third
of my entire life — during which I have met so many people who have guided, encouraged
and supported me. It’s nearly impossible to acknowledge them all here or even know where
to begin, so, like all the best stories, let me begin at the beginning.
Though astronomy and astrophysics have been a passion of mine since I was checking
out books on planets from the Edison Public Library on Grove Avenue in elementary school
or tutoring my friends in Mr. Jordan Burkey’s physics class at Wardlaw-Hartridge, it wasn’t
until my freshman year of college (enrolled as a business major) that I ever considered being
a “scientist”. Business classes were an utter bore and it was my roommate Jeff Munro who
convinced me that I liked my physics class more because it was challenging and I was good
at it. My thanks go out to friends like Jeff, John McClain and Ken Tsoi who helped me
figure out all those tough mechanics problems, study for exams and eat tons of Chinese BBQ
pork ribs.
Still, it was my undergraduate advisor Jack Hughes who really solidified it for me — one
of my few real role models. Jack is one of the greatest people I have ever met: an encouraging
advisor, brilliant scientist, committed professor but most of all a great person and friend. I
will never in my life forget the opportunity he gave me to go to Chile and observe at Cerro
Tololo in my junior year (and now I will forever be envious of my observer friends). If for
nothing else, thank you Professor Hughes for all the time you spent proofreading and helping
improve my University of Chicago statement of purpose!
Much of my graduate school experience has been defined by my many great friendships.
I couldn’t have made it through my first year without Vassilis, Anibal, Justin and Felipe
who pulled all those all-nighters and watched a couple of terrible movies with me — when
not busy solving impossible quantum problems of course. Another group of guys I’m proud
to call friends: Matt, Chris and Sam, didn’t help me get any work done, but they sure as
hell kept my spirits high. As did my good friends Matt Sharp and Rafael Jarmillo — often
with a flask full of spirits in hand — who also set the success bar very high (and likewise sat
through some terrible movies with me too). Last but not least there’s Ivo, who was nothing
less than a confidant; my Doc Films companion and the best damn swimming partner a
dude could ever ask for: even if the coach thought we were gay lovers (which we’re not).
Much of my time at the University of Chicago was devoted to education and outreach
in classrooms or planetariums, at science camps or national teacher conventions. These
experiences have fundamentally shaped my career path and would not have been possible
without the help of Randy Landsberg. I thank him for creating these opportunities and
always pushing me to produce the highest quality materials possible. Along this vein I’d also
like to mention Sarah Hansen, a very good friend, who was a frequent outreach collaborator
and was so very gracious to let me publish our work. Of course Reid was almost always
involved too; thanks for also being a great roommate, friend and hopefully someone I can
bounce my ideas off of (and mooch fresh picked fruit from) for a long time to come. For
that matter, big-ups to the entire Yerkes Institute crew from all five years and to those still
involved, particularly Chris Thom, one of the most genuine and open people I know.
Working for QUaD has often been a lonely venture for me, but only because the very
excellent collaboration was spread so far across the globe. I greatly appreciate their trust
in me to deliver an important result on their behalf, their comments, suggestions and encouragement during my efforts and for hosting several great collaboration meetings. Not to
mention for granting me the unique and unforgettable opportunity to travel to the South
Pole and work on the telescope. I wanted to thank Tom Culverhouse for walking down that
desolate road with me — at least for a time — and for more-or-less tolerating an office
full of elementary school teaching supplies for many years. Dan Marrone wasn’t officially a
collaborator, but I would be utterly remiss to not acknowledge him for fielding all my naive
questions and providing much needed insight and encouragement; Dan is awesome.
Clem Pryke is the most no-nonsense person I have ever come across in my life. He has
the unique ability to get to the bottom of any problem like a stone in water and he will
never assert anything that he cannot back up with a well reasoned argument; likewise, he
always expects the same of others. I learned a great deal from him about the process of
scientific inquiry and the importance of scientific integrity. These lessons might be distilled
into just three important points: be precise, keep it simple and check your work; however
obvious, I still find them a challenge. Moreover, I am very grateful for his continued support
throughout six long years, in spite of my level of output or future goals. While he and I have
often had our differences, I have grown unquestionably with him as an advisor. Clem may
have not always been nice, but he has always been fair and honest and I respect him greatly
for it.
I also wanted to thank my thesis committee, who are certainly worth mention. Though
he may not even remember, Stephan Meyer single-handedly reshaped my understanding of
cosmology in the 15 minutes we spoke during my first visit to the University of Chicago
when he suggested I think about the FIRB when I was just barely coming to understand the
CMB. Scott Dodelson — a natural teacher — was one of the most supportive professors I
ever had, even making me feel great about what was probably just a mediocre lensing paper
for his class. During the committee meetings I have always felt that he had my best interests
in mind first. Finally, it was John Carlstrom who inspired my application to the University
of Chicago in the first place, having devoted my entire personal statement to his work and
he sparked my interest in cosmology with the Sunyaev-Zel’dovich effect: an essential part of
this thesis.
Finally, I could not finish without thanking my family, which as they know I often define
broadly. Thank you to my greatest friends Avinash Karnani, who is the wellspring of my
ego and will one day hopefully employ me, and Ben Das, who always tries to keep me to
my word. Thank you to my older brother Misha for setting up my first computer and being
my biggest South Pole blog fan and my little sister Jenya, who never stops being proud of
me even if I drive her crazy. I’ve already dedicated this to my mother but I wish I could
also thank my father, who even in his passing served to make me who I am and make this
possible. I hope that he would appreciate it.
The person who I’d like to thank most of all is Zarah; this thesis is probably more a gift
to her than to anyone else! She keeps me real and has (mostly) kept me from going nuts
while I was writing this. Zarah is the only person who can manage to both make sure that
I am always proud of myself and my accomplishments and yet, never too full of myself. She
is both the ground beneath my feet and the guiding star in my heavens.
ABSTRACT
I present measurements of the cosmic microwave background (CMB) radiation temperature
anisotropy using the QUaD telescope, a several arcminute resolution bolometric polarimeter
operating at 100 and 150 GHz, located at the South Pole. The results presented here are
targeted to the multipole range 2000 < ℓ < 3000 using data from QUaD’s second and third
observing seasons. After masking the brightest point sources in the maps the results are
consistent with the primary ΛCDM expectation alone. I further estimate the contribution
of residual (un-masked) radio point sources using a model calibrated to our own bright
source observations, and a full simulation of the source finding and masking procedure.
Including this contribution slightly improves the χ². I then fit a standard SZ template to
the bandpowers and see no strong evidence of an SZ contribution, which is as expected for
σ8 ≈ 0.8; the best-fit value for the template amplitude is A_SZ = 1.2 (+1.2, −1.1).
CHAPTER 1
INTRODUCTION
Figure 1.1: The Flammarion woodcut is an anonymous wood engraving once believed to be
of medieval origin but later identified as most likely originating in the nineteenth century.
Its name derives from its first documented appearance, in Camille Flammarion’s 1888
book L’atmosphère : météorologie populaire. It is now a somewhat clichéd icon in astrophysics
and cosmology texts, used by authors attempting to appear sophisticated or wise, such as
this author of course.
1.1 Cosmology: Progress Driven by Data
As we each orchestrate our own busy, stressful lives from day to day, we are in fact on the
surface of a massive ball of metal and rock 8,000 miles in diameter (the Earth) speeding
around its axis at 800 mph¹ as it completes its rotation once per day. The Earth travels
along its orbit at an astounding 67 thousand mph around a sphere of predominantly hydrogen
and helium gas (the Sun) a million times its size — with a surface temperature of 5780 K
— 93 million miles away. The Sun itself is cruising through space at 500 thousand mph,
in an orbit around the dense core of a collection of hundreds of billions of companion stars
assembled together by gravity into a disk 600 million-billion (6×10^17) miles or 100 thousand
light-years² (ly) across that we affectionately call the Milky Way Galaxy (see Figure 1.2).
At least two smaller galaxies (the Large and Small Magellanic Clouds) are also orbiting
the Milky Way just at its outskirts. A third (Andromeda, Figure 1.3) rushes towards us
on an eventual collision course at a distance of only twenty times the Milky Way diameter,
while yet a fourth (the Triangulum) can be found trailing just behind. This collection of
relatively small galaxies and their satellites is aptly referred to as the Local Group. Beyond
the Local Group exist billions more galaxies, some thousands of times more massive than the
Milky Way and many clustered into collections of hundreds or more. Bewilderingly, these
galaxies appear, on average, to be rushing away from us in all directions, moving faster the
farther away they are; a consequence of our expanding universe.
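The Milky Way figure quoted above is easy to sanity-check from the definition of a light-year. A rough check in Python (the speed of light and year length are standard values, not taken from this chapter):

```python
# Sanity-check the quoted Milky Way diameter: 100,000 light-years
# should come out near 6e17 miles.
c_miles_per_s = 186_282                 # speed of light in miles per second
seconds_per_year = 365.25 * 24 * 3600   # one (Julian) year in seconds

miles_per_ly = c_miles_per_s * seconds_per_year   # miles in one light-year
diameter_miles = 100_000 * miles_per_ly           # Milky Way diameter in miles

print(f"1 ly ≈ {miles_per_ly:.2e} miles; diameter ≈ {diameter_miles:.1e} miles")
```

This gives roughly 5.9×10^17 miles, consistent with the "600 million-billion miles" quoted in the text.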
As uncontroversial and clichéd as that may seem to you now (at least if you’re a fellow
cosmologist), the historical development is remarkable to consider. It’s surprising how many
prescient concepts were introduced hundreds of years ago but languished for much of history
due to a lack of data. When that data came — almost always tied to technological advances
— cosmology flourished. Thus the overarching narrative of this development was a constant
need for more data to satisfy ever more sophisticated models.

1. If your busy life is situated in Chicago. If you find yourself at the poles, then you’re only spinning
around in place.
2. One light-year (ly) is the distance light travels in a single year.

Figure 1.2: The Milky Way as seen from Stagecoach, Colorado, USA. The planet Jupiter
is visible as the bright spot in the center of the frame. Image Credit: Jimmy Westlake
(http://faculty.coloradomtn.edu/jwestlake/)
At least in the Western context, Nicolaus Copernicus is often credited with the development
of the heliocentric model in the mid-sixteenth century, a paradigm shift from the
prevailing geocentric model of the time rooted in the traditions of Aristotle (c. 350 BC) and
Ptolemy (c. 150 AD). Yet the heliocentric model can also be dated back to at least 270
BC to the work of Aristarchus, or perhaps even further back to Pythagoras around 550 BC,
who postulated a system moving about a central fire but failed to identify it as the Sun.
Pythagoras placed the central fire at the heart of the universe based simply on a belief that
fire was superior to earth, wind or water, whereas Aristarchus used geometrical estimates of
the size of the Sun to determine that it was hundreds of times the size of the Earth and assert
that it was thus the less likely of the two to be moving.

Figure 1.3: The Great Andromeda Nebula, now known to be a typical spiral galaxy, likely
similar to the Milky Way in appearance, and also our nearest large neighbor. Image Credit:
Robert Gendler (robgendlerastropics.com)
At the time of its printing, the Copernican model wasn’t even an improvement on the
geocentric model: it could not predict the motions of the planets with any more accuracy
than the convoluted system of epicycles. It was Galileo’s observations of the moons of Jupiter
in 1610 that first really called the geocentric model into question, at least in a philosophical
sense, by identifying an example system of orbiting bodies not bound to the Earth. Another
nail in the geocentric coffin was his subsequent observation of the phases of Venus, direct
evidence that Venus orbited the Sun and not the Earth. Of course these key observations
could only have been made after the advent of the telescope. All previous astronomical
observations, from the ancient Greek philosophers to Copernicus himself, were made using
the naked eye, which limited them to at most 2,000 stars at a resolution of no better than
one arcminute and a limiting magnitude of ≲ 6.5.
Even after Galileo and his telescope, geocentrism was only replaced with a hybrid
geo-heliocentrism, most ardently supported by the likes of Tycho Brahe. Now everything besides
the Earth and the Moon revolved around the Sun, but the Sun still revolved around the
Earth; there was no compelling reason to believe otherwise. To break from the geocentric
model for good required definitive evidence for the motion of the Earth itself with respect
to the seemingly fixed celestial sphere. This came with the measurement of a twenty-arcsecond
stellar aberration by James Bradley in 1725. He correctly explained that the finite speed
of light and the motion of the Earth around the Sun couple to produce an apparent motion
of stars in the sky that is independent of their distance from Earth.
Heliocentric, geocentric or otherwise, all of these models shared a common postulate:
beyond the solar system at a fixed distance lay an eternally unchanging spherical shell of
stars, which marked the boundary of the universe; a prevalent assumption throughout the
ages. For a geocentric model this concept worked quite simply: the sphere was set rotating
about the Earth. For a heliocentric model the sphere need only be set far enough away to
evade a measurement of parallax, the distance-dependent, apparent shift in the position of
objects due to the motion of the observer.
As early as the 15th century Nicolaus Cusanus had already postulated an infinite, decentralized universe with no special place for the Earth, inspiring the likes of Giordano Bruno
and Johannes Kepler, who each cited him specifically in their own work. Bruno
fully rejected geocentrism and the celestial sphere, believing that the Sun was no different
Figure 1.4: William Herschel’s depiction of the Milky Way using the distribution of stars
on the sky; the Sun is assumed to be near the center of the diagram. Figure from Herschel
(1785)
than the stars, which were distributed randomly throughout the universe and were orbited
by their own planets; he was later burned at the stake. The definitive word on the matter
didn’t come until 1838 when Friedrich Bessel was finally the first to measure a stellar parallax
for 61 Cygni at 0.3 arcseconds; the celestial “glass ceiling” of the universe was shattered.
All along, the nature of the Milky Way — a moniker coined by the ancient Greeks —
was yet another subject of speculation and debate. Aristotle believed it to be a nebulous
atmospheric cloud, though his predecessors had already suggested that it was just a dense
collection of stars in the celestial sphere (Aristotle generally had a bad astronomical track
record). Again it was Galileo who is credited with resolving the issue: his observations with
the telescope were able to distinguish many individual stars in the Galaxy. Later in the
mid-18th century Immanuel Kant made his name in the field of astronomy by envisioning
the Milky Way as a collection of stars like the Sun, orbiting together in a disk as a galaxy,
just one of many in the universe. At the end of the eighteenth century — even before the
discovery of parallax — William Herschel (1785) attempted to determine the shape of the
Milky Way and the Sun’s position within it by counting the non-uniform distribution of stars
on the sky; doing a remarkably bad job (see Figure 1.4).
By the end of the 19th century, the identification of many unresolved nebulae led astronomers to conjecture that perhaps these were actually Kant’s “island universes”. This
came to a climax at the famous debate in 1920 between Harlow Shapley, arguing that the
Milky Way was the entirety of the universe and Heber Curtis arguing that many of the nebulae were in fact external galaxies populating a much greater universe. Both men were well
reasoned and held impressive credentials. Shapley was no fool: he had completed a great
deal of work measuring the distances to globular clusters within the Galaxy using Cepheid
variable stars. Curtis was fueled by the discovery that bright novae in the “Great Andromeda
Nebula” were systematically ∼ 10 magnitudes fainter than their Milky Way counterparts.
The resolution came soon after from
the observations of Edwin Hubble, a University of Chicago graduate, who like Galileo before
him with the Milky Way, was able to distinguish individual stars in the Andromeda galaxy.
He used the same Cepheid variable stars studied by Shapley to estimate a distance to
Andromeda that greatly exceeded Shapley’s estimate of the size of the Milky Way.
Hubble didn’t stop there: his systematic study of galaxies in the early 20th century led to
an incredible conclusion. He found that on average all light leaving galaxies was “redshifted”
and that the magnitude of the redshift was dependent on the distance to the galaxy (see
Figure 1.5). Traditionally — drawing from the physics of the 19th century — a redshift
is caused by the motion of the light-emitting object away from the observer, akin to the
Doppler shift of sound. By some remarkable coincidence, Albert Einstein had recently published
his new theory of General Relativity in 1915, which offered an alternative explanation. It
was not the galaxies that were rushing away from us but rather space itself that was in fact
expanding between the galaxies and us, distorting the spectral properties of the light. These
Figure 1.5: Edwin Hubble’s diagram from his original paper in 1929 showing that the recessional velocity v of local galaxies, as inferred from their redshift, was directly proportional
to their distance d, as inferred from Cepheid variable stars observed in the galaxies. Note
the error in the velocity units on the y-axis. Figure from Hubble (1929)
were compelling new theories that required data of unprecedented precision to test them.
1.2 The First 14 Billion Years in Reverse
The previous section was a broad review of the progress in our understanding of the structure
of the Universe but neglected an equally important part of any cosmology: the origin and
evolution of the Universe. The Universe might have come into existence in an instant or it
may have existed forever; it may be eternal, it may be static or it may be cyclical. Until
only recently this question was squarely within the realm of religion and philosophy alone,
with little or no observational and physical evidence to support any conjectures. Our current
time is different, both our cosmological model and our observations point to a rather well
determined timeline, which I will now address.
Hubble’s observation — at the close of the previous section — was the foundational
observation to our modern cosmological model and the implications were astounding: the
Universe was found to be expanding. Let’s take a moment to consider two of the simplest
but most profound implications. The first is that the Universe has a finite age — a galaxy
observed from Earth with an apparent recessional velocity v at a distance d, was actually
located at some time t = d/v in the past at the position of the Earth.3 Moreover, since
they are all receding (with a velocity that is linearly proportional to their distance), all
galaxies would have been at the same location at that same time t in the past. The second
implication is that the Universe wasn’t always the cold dark place we now find ourselves
in, which can be seen as a direct result of the ideal gas law p = nkT , where p is the gas
pressure, n is the number density and T is the temperature. An expanding Universe can
be considered as adiabatically cooling, whereby as the distribution of matter dilutes the
decrease in density results in a decrease in pressure and temperature. Thus in the past, the
Universe must have been a hotter, denser and generally less comfortable place. Point being,
all the big Aristotelian talk of an infinite, eternal and static Universe gets thrown right out
the window.
So if we were to run the cosmic clock backwards, the Universe would heat up as it contracted until all the matter was coincident; this initial, hot, dense state is what is commonly
referred to as the Big Bang. Yet, the term is a misnomer: it connotes an explosive event,
which occurred at some distinct location, spewing forth matter into a primordial pre-existing
space. In fact it is not matter that is exploding outwards but rather, as bears repeating, it
is space itself which is expanding, dragging matter along for the ride. Moreover, space is not
expanding into anything else nor is any new space being created. Rather, the actual distance
3. This relationship is typically written as v = H0 d, where H0 is referred to as Hubble’s constant. The
inverse 1/H0 is the approximate age of the Universe, also called a Hubble time.
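The footnote's estimate is easy to check numerically. Below is a minimal sketch (assuming H0 = 70 km/s/Mpc, i.e. h = 0.7, and standard unit conversions; the function name is my own, purely illustrative):

```python
# Back-of-the-envelope Hubble time t_H = 1/H0, for H0 given in km/s/Mpc.
# Assumed value: H0 = 70 km/s/Mpc (h = 0.7, as quoted later in the text).
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

def hubble_time_gyr(h0):
    """Return 1/H0 in gigayears, for H0 in km/s/Mpc."""
    h0_per_sec = h0 / KM_PER_MPC   # H0 converted to 1/s
    return 1.0 / h0_per_sec / SEC_PER_GYR

print(round(hubble_time_gyr(70.0), 1))  # roughly 14 Gyr
```

With h = 0.7 this comes out to roughly 14 Gyr, satisfyingly close to the 13.7 billion year age quoted throughout this chapter.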
between two points in space is simply increasing (yeah I know, nutty). Finally, and perhaps
most importantly, this hot, dense primal Universe wasn’t located in some special place, it
was everywhere at once. The Universe at the time of the Big Bang is the same Universe
today, it’s just a lot colder and less dense now.
1.2.1 A Bad Analogy
Imagine it this way: the Universe today is a very cold party at the end of the night when
most of the tired guests are chatting in little groups (but nobody’s gone home yet), let’s say
3 AM. I know, it’s a terrible analogy... but bear with me. Winding the clock back to the
peak of the party, you pass through the events of the night until it’s 1 AM. The music is
loud and everyone’s dancing away; the windows are fogged, the energy is electric and you’d
even say the place was glowing. Then at 2 AM the cops show up. The music is turned off
and everyone is dazed and confused; still buzzing from the excitement, people keep shuffling
around and bumping into each other but the magic is gone. Some people gravitate to the
kitchen to scoop up the last of the dip, some to the porch to hang around the keg, and a few
stay on the dance floor chatting away.
In reality, the same thought experiment applied to the actual Universe goes a little more
like this.4 As you run the clock back nothing happens for a very long time, the Universe
10 billion years ago looked a lot like it does now (the Earth itself is dated to ∼ 4 billion
years). The most noticeable feature would be a dramatic rearrangement of the galaxies as
a long history of collisions and mergers is undone: the Milky Way would deconstruct into a
collection of much smaller galaxies. The life-cycles of stars within the galaxies play out in
reverse: supernova remnants condense back into old stars and the stars keep burning bright
for billions of years back, with far more of them in the recent past than even today.
4. Follow along with Figure 1.6, starting on the right and moving leftward, making sure you note that
the time axis is not linear.
Figure 1.6: A very not-to-scale illustration of the evolution of the Universe under the current
Big Bang model. Figure from Hu & White (2004)
At about 1 billion years after the Big Bang, the first populations of stars are all that
are left, but they are big (∼ 100 times the mass of the Sun), bright, and live short million-year lives. Then suddenly the lights begin to go out; one-by-one, stars darken and expand
back into the progenitor gas clouds from which they originate. Finally, short of a billion
years after the Big Bang, it’s utterly dark — at least utterly dark to the human eye. If
your eyes could see infra-red or radio light you’d probably witness the disintegration of all
the structure we recognize today; now-dark proto-galaxies would dissolve away as the work
of gravity is undone. Eventually all the gas is dispersed, more or less evenly distributed
throughout space.
All the while the temperature of the Universe, and the density and pressure of diffuse
gas within it, has been steadily climbing. The rise in temperature starts to accelerate as
the years count down from billions to millions and then finally at about 300 thousand years
after the Big Bang the temperature hits ∼ 3,000 K,5 high enough to liberate electrons from
hydrogen atoms: the music goes up and the dance party is on.6 The environment at this
stage is not unlike the interior of a star. Charged particles — the now free electrons and
protons — are especially efficient at scattering radiation so any photon moving through space
gets jostled around: bounced from particle to particle.7
The effect is like a dense fog scattering headlights to the point you can’t quite tell where
the light is coming from, though in this case, the fog isn’t quite as thick. If you were around
at the time, all vision would be limited to roughly 100 lys, which on Earth at present, would
encompass no more than a thousand or so stars in the local stellar neighborhood; beyond
that, a faint white glow. The surface boundary between the clear space around you and the
white fog beyond which you cannot see is called the surface of last scattering, since this is
the place where light last bounced off the fog before entering your eye (or telescope). Of
course, nothing was around at that time except for free electrons, free protons, a smattering
of free helium nuclei and a swath of photons in a gas distributed uniformly throughout the
Universe. Well there’s actually some other important stuff I am ignoring, like dark matter
and neutrinos, but let’s not worry about that for the moment.
Moving further back in time, as the density and temperature increases, the fog closes in
around you. At about 50 thousand years after the Big Bang matter ceases to, well, matter
anymore and radiation dominates the energy of the Universe. By three minutes after the
5. The unit K is the kelvin and scales just like Celsius except that 0 °C = 273 K.
6. The exact time is more like 380,000 years after the Big Bang, at a redshift of z ≈ 1100.
7. Photons are particles of light and light is synonymous with electromagnetic radiation. Radio waves,
microwaves, infra-red, ultraviolet, x-rays and gamma rays are all forms of light with increasing frequency
and energy respectively. Light that can be seen by human eyes is a very particular form that we call visible.
Big Bang the temperature is so high (a billion degrees Kelvin) that nuclei can’t even keep it
together anymore and they are disassociated, followed by the protons and neutrons, which
constitute them, soon after. At the beginning, a fraction of a second after the Big Bang,
there was light alone (and as they say, it was good).8
1.2.2 A Cosmic Birth Certificate
But, I’ve gotten a little carried away and have traveled too far back, so let’s change direction
and move forward a bit to that special moment at around 300 thousand years after the
Big Bang: it’s 1 AM and the party’s hot.9 Again, the temperature at that time was high
enough to keep atoms disassociated into free electrons and protons, which are particularly
efficient at scattering photons; the matter is thereby coupled to the radiation, like dancers
coupled to the music. The effect of this tight coupling is to keep matter from gathering under
the force of gravity, as it persistently seeks to do. As soon as a bit of matter starts to clump,
the radiation is there to resist the collapse and drive it back out into the party.10
Continuing to move forward in time, the Universe cools through expansion to be below
that 3,000 K mark where suddenly the free protons and electrons combine to form neutral
atoms, which are considerably less efficient at scattering photons. It’s like the music abruptly
stopped, the matter — gone neutral — is liberated (decoupled) from the radiation and
both are free to go their separate ways. This critical event is commonly referred to as
“recombination”, though the “re-” is misleading since it is really the first time that neutral
atoms have ever existed. Of course here the party analogy fails in a minor but important
8. I’ve taken a little poetic license here. Light was not precisely alone, amongst other things, a sea of
constantly replenishing particle/anti-particle pairs filled space, which annihilated rapidly creating more light.
9. This is hardly a moment at all, taking tens of thousands of years to complete, with separate stages
for hydrogen and helium. Though that timescale can be fairly called a cosmological moment.
10. Dark matter — which I am still ignoring — is not susceptible to the whims of electromagnetic forces
and does collapse under its own self-gravity.
way: the music of radiation isn’t actually turned off, the matter just stops listening.
This music, a “cosmic symphony” (Hu & White, 2004), is what this thesis is all about.
The distribution of the radiation, which exactly traced the distribution of matter up to this
moment, is frozen and preserved for all time as a cosmic background radiation (CBR). At
the time of recombination, the temperature of the CBR was 3,000 K — approximately the
temperature of an incandescent lightbulb filament — and would have appeared as a bright
white glow. As the Universe expanded and cooled, the temperature of the CBR steadily
dropped to its current temperature of 2.725 K and thus no longer appears as hot-white but
instead as a faint microwave glow; hence the moniker: the cosmic microwave background
(CMB) radiation.
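Since the CMB temperature scales inversely with the scale factor, the cooling from 3,000 K to 2.725 K directly measures how much the Universe has expanded since recombination. A quick numerical sketch (pure Python; the variable names are my own):

```python
# The CMB temperature scales as T ∝ 1/a, so the drop from recombination to
# today directly gives the net expansion factor 1 + z since last scattering.
T_REC = 3000.0   # K, temperature at recombination (from the text)
T_NOW = 2.725    # K, present-day CMB temperature

z_rec = T_REC / T_NOW - 1.0
print(round(z_rec))  # about 1100, matching footnote 6
```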
Though its temperature dropped considerably, the statistical properties of its spatial
distribution have barely changed since most of the Universe is effectively free space.11 Thus
preserved, the CMB is in this sense a cosmic birth certificate, encoding all the information
of the early Universe. Inasmuch as Hubble’s observations of receding redshifted galaxies
implied a Universe that evolved from a Big Bang state — by logical extension and simple
physics — it also implied the existence of a CBR.
The serendipitous detection of the CMB in 1965 by Arno Penzias and Robert Wilson
at AT&T Bell Laboratories in Holmdel, New Jersey (Penzias & Wilson, 1965) was nothing
short of a confirmation of the Big Bang hypothesis. They didn’t actually realize what
they had stumbled upon at first: they initially believed the signal was just unaccounted-for
noise, going so far as to clean bird droppings from their telescope in a failed attempt to
eliminate it. When they shared their findings with an ambitious group of cosmologists at
Princeton University — Robert Dicke, Jim Peebles, David Wilkinson, and Peter Roll —
who were busy developing a CBR detector of their own, the matter was settled, albeit with
11. The CMB photons are free-streaming to us from the surface of last scattering. See Section 1.2.5 for
more insight about what we actually observe.
cautious reservation (see Dicke et al., 1965).12
1.2.3 What’s the Matter
What happened to the matter after recombination is equally profound. Decoupled from the
radiation, the matter was now bound only to itself through the force of gravity and began
to collapse into stars and galaxies and eventually planets and people. With the formation
of stars, the hydrogen and helium created after the Big Bang were processed into all the
elements; up to iron in the stellar cores and then all the way through the rest of the periodic
table via violent stellar deaths by supernova. In fact, the cosmic abundances of these
elements can be predicted from early-Universe physics, and detailed measurements of them
provide yet another confirmation of the Big Bang model.13
The stars formed in newly developed galaxies, which soon merged together with their
neighbors; the evolution tends to start with beautiful spiral galaxies (like the Milky Way
and Andromeda) actively forming new stars, whose orderly orbits get randomized in mergers
into much larger but less picturesque ellipticals loaded with old stars.14 On the whole, the smooth distribution of
gas at recombination coalesces into a spiderweb of cosmic structure, interlinking massive
clusters of galaxies through long flowing filaments (see Figure 1.7). Computer simulations
that are used to model the evolution of large scale structure have accurately reproduced
the actual distributions of galaxies observed via systematic redshift studies.15 The precise
detail of this large scale structure formation, governed almost exclusively by gravity, is largely
12. For an excellent account of this iconic story, see Kolb (1996).
13. Though Big Bang Nucleosynthesis (BBN), as it is called, only predicts the formation of the lightest
elements.
14. The formation of the first stars and galaxies remains a subject of active research.
15. These surveys are the modern version of E. Hubble’s original work. The Sloan Digital Sky Survey
(SDSS), for instance, has covered nearly a quarter of the sky and measured the redshift to almost one
million galaxies.
driven by the heretofore neglected dark matter, which outnumbers regular baryonic matter
(the stuff we’re all made of: the protons, neutrons, electrons, etc.) by roughly 5-to-1. Yet dark
matter — so called because it cannot be detected except through its gravitational effects
— is required to match the simulations to the observations and, for that matter, to explain
CMB observations. Figure 1.7 actually shows the distribution of dark matter, in which the
regular baryonic matter would be embedded and Figure 1.8 compares the results of such
simulations against the real galaxy surveys.
1.2.4 The Big Bang Reigns Supreme
And so the Big Bang model reigns supreme, implied by the early observations of Edwin
Hubble, contextualized by Einstein’s general relativity and confirmed by the discovery of
the CMB. Today’s cosmology combines the many fields of physics from general relativity
to quantum mechanics, fluid mechanics and radiative transfer with sophisticated statistics
and computer simulations to make predictions about the structure and composition of the
Universe that are routinely borne out by observation.
We continue to study the CMB in fine detail, using the primordial information encoded
in it to learn about the composition, size, age and shape of the Universe. Moreover, the
cosmic structure which formed can often interact with incoming CMB photons, creating
small secondary modifications that provide a wealth of information as well.
Yet our current cosmology is not without its mysteries and caveats, be they the nature
of dark matter or the existence of a new dark energy. As our current observations reach
their technological limits we may find ourselves once more — as our ancestors before us —
confronted by unanswerable questions about the Universe we live in. Let us hope to never
again find our ideas languishing but regardless, we will undoubtedly forever be in need of
more data.
Figure 1.7: The Millennium simulation of large scale structure at the present time, 13.7 Gyr
after the Big Bang; a 124 Mpc (400 million lys) box is shown. The simulation shows the
distribution of dark matter, which dominates the mass budget of the Universe. The matter
distribution resembles a filamentary spider’s web: large clusters of galaxies form at the
intersections of filaments while elsewhere, in voids, matter is scarce. Figure from Springel et al.
(2005)
1.2.5 The Observable Universe
The way of thinking about the CMB introduced above — essentially as the temperature of
the giant oven we call the Universe — is an important insight but there’s something
missing from the picture. As mentioned earlier, the Universe today is just a colder, less
dense version of the Universe at the Big Bang or at recombination. Yet, the Universe we can
observe at any moment is restricted by the speed of light c; the cosmic speed limit for all
Figure 1.8: The Millennium simulation of large scale structure compared against three major
redshift surveys. The agreement between simulation and observation is remarkable. Figure
from Springel et al. (2006)
forms of matter or radiation. Since all information is exchanged by interaction with matter
or radiation, the distance we could ever possibly see is limited by the distance light has
traveled since the beginning of time. This defines our observable Universe, and the edge of
it is referred to as the horizon — the surface of a sphere.
At recombination the Universe was 300 thousand years old, so the horizon then was
300 thousand lys in radius, whereas the surface of last scattering — the edge of the pre-recombination fog — was 100 lys in radius (as discussed previously). Thus if you were around
at recombination, you’d only be seeing 1/3,000th of the potentially observable Universe,
about 0.033%, hence the fog analogy. Today, the horizon is 13.7 billion lys away and the
surface of last scattering still limits our view to it, though the limit is now 99.998% of the
potentially observable Universe. When we study the CMB we are making observations of
this last scattering surface, from a part of the Universe that was ∼ 13.7 billion lys
away and we still see as stuck in the fog.16
The key notion here is that looking out into space is equivalent to looking back into time,
though not our own directly but rather the time of some comparable place in the Universe.
The essential assumption — the 1 + 1 = 2 of modern cosmology — is that the Universe
is homogeneous and isotropic. This is a fancy way of saying that the Universe is made of
the same stuff and obeys the same physics everywhere and is on average the same in every
direction, i.e. the Earth occupies no special place in the Universe; this is also known as the
cosmological principle.
Abandoning any last vestige of formality — or sanity — you can think of it all like
this. At the time of the Big Bang a knowledge bomb went off right where you stand. As
the shockwave of this bomb (the horizon) passed through the existing, expanding Universe,
16. Well, to be precise, time and distance don’t actually correlate this simply because the expansion of
the Universe always continues to increase the distances. Thus an object emitting light some time ago has
actually moved further away by the time the light gets here. It’s a complication that probably doesn’t help
elucidate any concepts here.
information started coming back to your location (instantaneously) about whatever the
shockwave passed, from the moment it passed it, and onwards. Right now, this horizon is
13.7 billion lys away, and 300,000 years ago it swept past the current location of the surface of
last scattering and kept going. You can’t see anything beyond that surface because, whatever
is beyond it, was still in the primordial fog when the horizon passed it. In 1 billion or so
years, telescopes might be able to see all the way back to the first stars that evolved from
what is hidden in that fog now. All the while the horizon will continue to sweep outwards,
at that point out to 14.7 billion lys, revealing a new surface of last scattering a billion
lys away from the current one.
1.3 The Metric: The Fabric of Spacetime
Space, in the most basic of terms, is a set of points and the position of an object at some
point in space is usually referred to as its coordinate. Likewise, time is also a collection of
points with coordinates. The ingenious insight of General Relativity is that a) the points
in space and time are not distinct, hence the term spacetime and b) that the motion of
objects in spacetime is severely restricted. The second statement is usually communicated
as “nothing can go faster than the speed of light in spacetime” but in fact the alternative
is more enlightening, “light always travels along the shortest path between two points in
spacetime”, which though equivalent says more about spacetime and less about light.
The first thing you might want to know about a spacetime you live in is how far it is from
one point to the next and for this you need to know the metric. This essential characteristic
of spacetime tells you about all the bumps and wiggles along the way; it’s what a contour
map is for a hiker; there’s no way to know how long a hike will take unless you know how
many mountains you’re going to have to cross. Given a set of coordinates xν the metric gµν
defines the distance, also known as the invariant interval, in that space of coordinates as
ds^2 = g_{\mu\nu} dx^\mu dx^\nu    (1.1)
where summation is implied over lower and upper indices. As for light, it follows what are
known as geodesics, which are the shortest paths and are entirely determined by the metric.
So for example, on a flat Euclidean plane the coordinates are simply (x, y), the metric is
given by

g_{\mu\nu} =
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}    (1.2)

and the distance between two points is ds^2 = dx^2 + dy^2, which simply recovers the Pythagorean
theorem. Geodesics in Euclidean space are simply straight lines.
In four dimensional spacetime, the interval is ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2, where c is
the speed of light and can be thought of as a conversion factor between space and time. The
corresponding metric for this space, referred to as Minkowski space, is a simple extension of
the Euclidean one

g_{\mu\nu} =
\begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}    (1.3)
where for simplicity c is set to unity; this space is flat just like the Euclidean space.
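The contraction in equation 1.1 is concrete enough to sketch in a few lines of pure Python, using the Minkowski metric of equation 1.3 with c = 1 (the function name and displacement vectors are my own, purely illustrative):

```python
# Sketch of the contraction ds^2 = g_{mu,nu} dx^mu dx^nu of equation 1.1,
# evaluated for the Minkowski metric of equation 1.3 with c = 1.
def interval(g, dx):
    """Contract a metric (nested list) with a displacement vector dx."""
    n = len(dx)
    return sum(g[mu][nu] * dx[mu] * dx[nu]
               for mu in range(n) for nu in range(n))

# Minkowski metric, diag(-1, 1, 1, 1)
eta = [[-1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0, 1.0]]

# A light ray covering dx = dt has ds^2 = 0: it travels on a null geodesic.
print(interval(eta, [1.0, 1.0, 0.0, 0.0]))   # 0.0
# A purely spatial displacement recovers the Pythagorean theorem.
print(interval(eta, [0.0, 3.0, 4.0, 0.0]))   # 25.0
```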
For an expanding universe, the distance between the spatial coordinates is correlated with
time. The natural way to express this is ds^2 = -dt^2 + a^2(t) d\sigma^2, where a(t) is some function
that describes the expansion of the universe and d\sigma^2 = \gamma_{ij}(u) dx^i dx^j is the spatial part of
spacetime, where i and j run from 1 to 3 and \gamma_{ij}(u) is a three dimensional metric. A little bit
of general relativity and algebra (Carroll, 2004) yields the familiar Robertson-Walker metric
ds^2 = -dt^2 + a^2(t) \left[ \frac{dr^2}{1 - \kappa r^2} + r^2 d\Omega^2 \right]    (1.4)
expressed in spherical coordinates where dr corresponds to the radial direction and dΩ to
the angular direction.17 The parameter κ is related to the curvature of the universe, such
that the universe is closed for κ > 0, open for κ < 0 and flat for κ = 0, in which case γ_{ij} just
reduces to the Euclidean metric in three dimensions.
Armed with a metric for the expanding universe, we can now use Einstein’s equation to
derive cosmological properties for this space; Einstein’s equation can be expressed as
G_{\mu\nu} = 8\pi G T_{\mu\nu}    (1.5)
where Gµν is the Einstein tensor, Tµν is the energy-momentum tensor and regular G is
good ole’ Newton’s gravitational constant. The left hand side of this equation is all about
the underlying properties of spacetime, i.e. the metric, whereas the right hand side is all
about the stuff which populates space (baryons, dark matter, radiation, etc.). At this point
it’s compulsory for me to note the beauty of this theory: how the notion of two masses
interacting via the gravitational force is totally wiped away, replaced with an “energy fluid”
interacting with the fabric of spacetime; but, it really is beautiful.18
The universe, on a large enough scale, can be modeled as a perfect fluid, in which case
17. It’s too pedantic to write out the metric as a matrix every time, especially when it gets complicated,
so often only the resulting interval is expressed.
18. At the very least, Einstein & Co. definitely got the language down: energy, spacetime, tensor, fabric,
etc. sound so much more pleasing than force and mass.
the energy-momentum tensor becomes

T_{\mu\nu} =
\begin{pmatrix}
\rho & 0 & 0 & 0 \\
0 & p & 0 & 0 \\
0 & 0 & p & 0 \\
0 & 0 & 0 & p
\end{pmatrix},    (1.6)
where ρ is the density and p is the pressure. Generically, the pressure can be related to the
density as p = wρ, where w is referred to as the equation of state, and the conservation of
energy implies that ρ ∝ a^{-3(1+w)}, where
for matter:     w_m = 0       p_m = 0         ρ_m ∝ a^{-3}
for radiation:  w_r = 1/3     p_r = ρ_r/3     ρ_r ∝ a^{-4}
for curvature:  w_c = -1/3    p_c = -ρ_c/3    ρ_c ∝ a^{-2}
for vacuum:     w_Λ = -1      p_Λ = -ρ_Λ      ρ_Λ ∝ a^{0},    (1.7)
for matter m, radiation r, curvature c and a cosmological constant Λ.19
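The scalings in equation 1.7 all follow from the single rule ρ ∝ a^{-3(1+w)}, which can be sketched in a few lines (an illustrative snippet; the names are my own):

```python
# The single scaling rule behind equation 1.7: rho ∝ a^{-3(1+w)}.
def density_scaling(w, a, rho0=1.0):
    """Density at scale factor a for equation of state w, with rho0 at a = 1."""
    return rho0 * a ** (-3.0 * (1.0 + w))

# At half the present scale factor (a = 0.5, i.e. z = 1):
for name, w in [("matter", 0.0), ("radiation", 1.0 / 3.0),
                ("curvature", -1.0 / 3.0), ("vacuum", -1.0)]:
    print(name, density_scaling(w, 0.5))
# matter is 8x denser, radiation 16x, curvature 4x, vacuum unchanged
```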
Plugging in the energy-momentum tensor from equation 1.6 into the Einstein equation 1.5
using the Robertson-Walker metric in equation 1.4 results in
\left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\,\rho - \frac{\kappa}{a^2}    (1.8)

and

\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p)    (1.9)
which together are referred to as the Friedmann equations for the Friedmann-Robertson-Walker (FRW) universe.

19. The parameters for the curvature are representations of the effective contribution from curvature, which
is not a form of matter or energy but rather a product of the metric.
With these equations we can define the basic cosmological parameters which are measured by modern experiments, including QUaD. The Hubble constant that was introduced
previously is formally defined as
H = \frac{\dot{a}}{a},    (1.10)
where the value today (for a = 1) is referred to as H_0 = 100h km s^{-1} Mpc^{-1}, with h
typically measured to be 0.7. The interpretation of the Hubble constant in the context of
galaxy recessional velocity comes from the relationship between scale factor and redshift.
The observed redshift of emitted light with a rest wavelength of λem is defined as
λ
− λem
z = obs
λem
(1.11)
where in electromagnetic theory the difference in wavelength is from doppler shifting. In the
case of general relativity, the wavelength is stretched by the expansion and z = 1 − a−1 or
a = (1 + z)−1 .
The density of some constituent ρ_x (m, r, Λ, etc.) is usually expressed with respect to the critical density as

\Omega_x = \frac{\rho_x}{\rho_{crit}},   (1.12)

where

\rho_{crit} = \frac{3H_0^2}{8\pi G},   (1.13)

which is simply equation 1.8 for a currently flat universe (κ = 0). The curvature component is written as \Omega_c = -\kappa/(H^2 a^2).
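To attach a number to equation 1.13, a short sketch in SI units (the constant values are standard reference numbers, rounded):

```python
import math

# Critical density rho_crit = 3 H0^2 / (8 pi G), equation 1.13, in SI units.
G = 6.674e-11          # Newton's constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22    # metres per megaparsec

def critical_density(h=0.7):
    """Critical density in kg/m^3 for H0 = 100 h km/s/Mpc."""
    H0 = 100.0 * h * 1000.0 / MPC_IN_M   # convert km/s/Mpc to 1/s
    return 3.0 * H0**2 / (8.0 * math.pi * G)
```

For h = 0.7 this evaluates to ≈9.2 × 10⁻²⁷ kg/m³, i.e. a handful of hydrogen atoms per cubic metre.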
Finally the comoving horizon is defined as the total comoving distance light could have traveled if unimpeded since the Big Bang, calculated as

\eta = \int_0^t \frac{dt'}{a(t')} = \int_0^a \frac{da'}{a'^2 H(a')},   (1.14)
which is also known as the conformal time. To determine the dependence of the Hubble parameter on the scale factor, equation 1.8 is rewritten using the above definitions and the density scalings from equation 1.7 such that

H(a) = H_0 \sqrt{\sum_x \Omega_x\, a^{-3(1+w_x)}}.   (1.15)
By changing the limits of integration, equation 1.14 can also be used to determine the comoving distance χ between events during the evolution of the universe. The luminosity distance is then d_L = (1 + z)χ, and multiplying the comoving distance by the scale factor yields the angular diameter distance, d_A = (1 + z)^{-1}χ = (1 + z)^{-2} d_L.

This section should now serve as a reference for subsequent discussion of cosmological parameters. As will be seen, the CMB provides important information about the spatial curvature and relative density parameters.
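Equations 1.14 and 1.15 are straightforward to evaluate numerically. A minimal sketch, assuming illustrative flat-ΛCDM values Ω_m = 0.3 and Ω_Λ = 0.7 (chosen for the example, not quoted from this chapter), in units where H₀ = 1:

```python
import math

def hubble(a, H0=1.0, components=((0.3, 0.0), (0.7, -1.0))):
    """H(a) from equation 1.15; components is a tuple of (Omega_x, w_x) pairs."""
    total = sum(Om * a ** (-3.0 * (1.0 + w)) for Om, w in components)
    return H0 * math.sqrt(total)

def comoving_horizon(a, steps=100_000, **kw):
    """Conformal time eta(a) = integral_0^a da' / (a'^2 H(a')), equation 1.14,
    evaluated with the midpoint rule (in units of 1/H0 when H0 = 1)."""
    da = a / steps
    eta = 0.0
    for i in range(steps):
        ai = (i + 0.5) * da         # midpoint of each scale-factor cell
        eta += da / (ai ** 2 * hubble(ai, **kw))
    return eta
```

A useful sanity check: for a pure matter universe (Ω_m = 1) the integral has the closed form η(a) = 2√a / H₀, which the midpoint sum reproduces.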
1.4 Characterizing the Cosmic Microwave Background
The pre-recombination, post-Big-Bang-nucleosynthesis (BBN) universe contained a tightly
coupled gas of photons, electrons and protons. The coupling was maintained by Thomson scattering between photons and charged particles and Coulomb scattering between charged particles. Perturbations in the density of the photon-baryon gas propagated as sound waves, regions of compression and rarefaction, becoming hotter when compressed and cooler when
rarefied. This gas was gravitationally influenced by concentrations of dark matter particles,
which were not coupled to it through electromagnetic forces, and were free to collapse since
the end of inflation (see Section 1.4.4). These gravitational potentials kinematically influenced the behavior of the gas. In the following sections I summarize the analysis, closely following Hu & Dodelson (2002).

Figure 1.9: A full-sky intensity map of the CMB made from five years of space-based observation by the Wilkinson Microwave Anisotropy Probe (WMAP), with the monopole and dipole contributions removed. The bright central band running across the map is the Galaxy, which is a bright foreground source at radio frequencies. The map is in an Aitoff projection. Figure from Hinshaw et al. (2009)
1.4.1 The Temperature Field
While the coupling between the photons and baryons was strong, due to a highly ionized baryon component, the radiation was bound and unable to propagate freely. At a redshift of ∼1100 (recombination), the electrons and protons cooled sufficiently to combine and form neutral atoms, which no longer Thomson scattered photons. This point is usually referred to as the surface of last scattering; from this point onwards the photons were able to free-stream through space virtually unimpeded. This is the source of the basic CMB observations we now make: a nearly uniform-temperature, isotropic, partially polarized blackbody radiation field, originating from an opaque background behind which we are unable to make any further observations.
The main observable of the CMB is the intensity fluctuation — typically expressed in
thermodynamic temperature — in some angular direction n̂ on the sky, Θ(n̂) = ∆T /T . A
full-sky intensity map of the CMB is shown in Figure 1.9, with the monopole and dipole
contributions removed (see below). We can expand Θ(n̂) into a spherical harmonic basis given by

\Theta_{\ell m} = \int d\hat{n}\, Y^*_{\ell m}(\hat{n})\, \Theta(\hat{n}).   (1.16)
For a given scale on the sky, with multipole moment ℓ, ⟨Θ_{ℓm}⟩ = 0 (except for ℓ = 0, see below). The variance in the temperature field at some multipole is given by

\langle \Theta^*_{\ell m} \Theta_{\ell' m'} \rangle = \delta_{\ell\ell'}\, \delta_{mm'}\, C_\ell.   (1.17)

This is referred to as the temperature power spectrum and, if the fluctuations are Gaussian, then the distribution of power is fully characterized by the spectrum, which is independent of m.
On small scales, where the curvature of the sky may be neglected, the spherical harmonic analysis reduces to standard Fourier analysis in two dimensions: the flat-sky approximation. In this limit the multipole moment becomes the usual Fourier wavenumber, where the angular wavelength is related to the wavenumber by θ = 2π/ℓ. Large multipoles correspond to small angular scales, i.e. ℓ ∼ 10² corresponds to degree angular scales. The power spectrum is typically reported as the power per logarithmic interval in wavenumber,

\Delta_T^2 \equiv \frac{\ell(\ell+1)}{2\pi}\, C_\ell\, T^2,   (1.18)
Figure 1.10: The WMAP 5 yr temperature power spectrum result, which made an exquisite measurement of the power spectrum up to multipoles of ℓ ∼ 1000, or ∼10 arcminutes. The points are plotted with errorbars indicating the measurement uncertainty from instrumental noise, whereas the grey band shows the limiting cosmic variance. The WMAP experiment was able to observe the entire 4π steradians of sky from space, but covering so much sky at quarter-degree resolution resulted in shallow, noisy maps at small angular scales. Thus the noise errorbars become very large on points at higher multipoles (smaller angular scales). The red line is a best-fit theoretical model. Figure from Nolta et al. (2009)
with units of µK².20

20. Throughout the rest of this thesis C_ℓ is assumed to implicitly incorporate the temperature T².

Since we are limited to sampling only one universe, the number of modes for a given multipole moment ℓ is limited to 2ℓ + 1; this leads to an intrinsic uncertainty on C_ℓ given by
\Delta C_\ell = \sqrt{\frac{2}{2\ell + 1}}\, C_\ell.   (1.19)
(1.19)
This is commonly referred to as the cosmic variance and fundamentally limits the information
available from the CMB.
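Equation 1.19 gives a one-line expression for the fractional cosmic-variance limit:

```python
import math

def cosmic_variance_fraction(ell):
    """Fractional uncertainty Delta C_l / C_l = sqrt(2 / (2 l + 1)) from
    equation 1.19 (full-sky observation, no instrument noise)."""
    return math.sqrt(2.0 / (2.0 * ell + 1.0))
```

The limit is ≈63% at ℓ = 2 but only ≈3% by ℓ = 1000, which is why the grey band in Figure 1.10 matters most at low multipoles.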
The first multipole (` = 0) is referred to as the monopole and corresponds to average
power over the whole sky, which is simply determined by the local CMB temperature of
TCM B = 2.725 K. This measurement immediately provides the redshift to the surface of
last scattering since the temperature of recombination is set by basic atomic physics and the
temperature of a photon field is inversely proportional to the scale factor:

T \propto \frac{hc}{\lambda k_B} \propto \frac{1}{a} \propto (1 + z).   (1.20)
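Equation 1.20, together with the measured monopole, fixes the redshift of the surface of last scattering; a quick sketch (the ≈3000 K recombination temperature is a standard reference value, assumed here for illustration):

```python
# T(z) = T_CMB * (1 + z), from equation 1.20.
T_CMB = 2.725  # K, the measured monopole temperature

def photon_temperature(z):
    """Photon blackbody temperature at redshift z, in kelvin."""
    return T_CMB * (1.0 + z)

def redshift_at_temperature(T):
    """Invert equation 1.20: the redshift at which the photons had temperature T."""
    return T / T_CMB - 1.0
```

Plugging in a recombination temperature of ≈3000 K returns z ≈ 1100, the redshift of last scattering quoted above.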
The ℓ = 1 mode, the dipole, corresponds to the local temperature gradient across the sky. Since the motion of an observer Doppler shifts the frequency of observed radiation, the dipole is dominated by our velocity through space: the CMB is blueshifted in the direction of our motion. The net velocity of the Earth with respect to the CMB is 370 km/s, dominated by the Local Group's attraction to Virgo at ∼600 km/s, which is in opposition to the ∼200 km/s orbital velocity of the Earth/Sun around the Milky Way (Peacock, 1999). The resulting dipole value is relatively high at T_CMB · v_γ = 2.57 mK, where v_γ is the dipole term, as compared to the theoretically predicted value of T_CMB · v_γ ∼ 30 µK. There is no reason to expect them to agree: the Milky Way baryons long ago decoupled from the primordial photon field.
1.4.2 Anisotropies from Acoustic Oscillations
The power spectrum defined in equations 1.17 and 1.18 describes the variance of fluctuations
in the temperature at the surface of last scattering for a given angular scale on the sky,
which corresponds to a physical scale at recombination. These fluctuations, referred to as
the temperature anisotropy, can be accurately predicted from basic physics given a set of
cosmological parameters.
Starting with a spectrum of initial density perturbations, distributed in amplitude and scale — established during inflation (see Section 1.4.4) — as the horizon grows from time t₀, perturbations approximately at the horizon scale begin to evolve hydrodynamically (the sound speed is near c). Baryons in over-dense regions are driven outwards by photon pressure, whereas those in rarefied regions collapse into dark matter potential wells. As new modes are included with time, the photon-baryon gas undergoes acoustic oscillations until recombination, at which point the photon field free-streams, preserving its final state. The largest scale to have collapsed or rarefied completely by the time of recombination is that which entered the horizon at a time before recombination set by the sound speed.
The analysis of the acoustic oscillations is typically done in Fourier space where, since perturbations are very small, the evolution equations are linear and different Fourier modes evolve independently. Thereby, instead of partial differential equations for the field, we are left with ordinary differential equations. Furthermore, we can exploit rotational symmetry so that all Θ(k) for a given k obey the same equations, where the monopole of the temperature field is decomposed into Θ(k) such that

\Theta_{\ell=0,m=0}(\mathbf{x}) = \int \frac{d^3k}{(2\pi)^3}\, e^{i\mathbf{k}\cdot\mathbf{x}}\, \Theta(\mathbf{k}).   (1.21)
It is instructive to first consider the case of a perfect photon fluid, free of baryons and dark matter potentials. In this case we have p_γ = ρ_γ/3 and c_s \equiv \sqrt{\dot{p}_\gamma/\dot{\rho}_\gamma} = 1/\sqrt{3}, where p_γ and ρ_γ are the pressure and density respectively and c_s is the sound speed. The continuity equation for Θ is then given by

\dot{\Theta} = -\frac{1}{3} k v_\gamma,   (1.22)
where v_γ is the photon fluid velocity and corresponds to the dipole of the temperature field (as previously stated). The Euler equation takes the form

\dot{v}_\gamma = k\Theta.   (1.23)

Here time derivatives are taken with respect to conformal time and spatial derivatives reduce to ik. Combining the two equations yields a familiar oscillator equation,

\ddot{\Theta} + c_s^2 k^2 \Theta = 0.   (1.24)
Assuming negligible initial velocity perturbations, the solution to this equation is simply

\Theta(\eta_*) = \Theta(0) \cos(k s_*),   (1.25)

where Θ(0) is an initial temperature perturbation, η is the conformal time with c = 1, and the sound horizon s \approx \eta/\sqrt{3} is the distance a sound wave can travel by η, with asterisks denoting quantities at recombination.
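Equation 1.24 can be checked numerically; a minimal sketch that integrates the oscillator with a velocity-Verlet (leapfrog) scheme, in units with c_s = 1/√3 and an arbitrary k (the scheme choice is mine):

```python
import math

def integrate_oscillator(k, eta_end, theta0=1.0, steps=20000,
                         cs=1.0 / math.sqrt(3.0)):
    """Integrate Theta'' + (cs*k)^2 Theta = 0 (equation 1.24) from
    Theta(0) = theta0, Theta'(0) = 0, using velocity-Verlet steps."""
    d_eta = eta_end / steps
    w2 = (cs * k) ** 2
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        theta_dot += 0.5 * d_eta * (-w2 * theta)   # half kick
        theta += d_eta * theta_dot                 # drift
        theta_dot += 0.5 * d_eta * (-w2 * theta)   # half kick
    return theta
```

The result matches Θ(0) cos(ks) with s = c_s η, the solution quoted in equation 1.25.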
Equation 1.25 is essentially a standing wave or acoustic wave of compressions and rarefactions. There are two immediate consequences to this simple solution. The first is that if ks ≪ 1, so that the spatial scale of interest is much larger than the sound horizon, the temperature fluctuation at recombination is the frozen initial condition for that scale. The second observation is that the magnitude of the perturbations will follow a harmonic relationship, with k_n = nπ/s_* for an integer n. This result can provide an estimate for the extent of a peak in the CMB angular anisotropy; the physical scale of a perturbation is λ ≈ π/k and the angular extent will be given by θ_n = λ/D_*, where D_* is the angular diameter distance to recombination. For a flat universe D_* ≈ η₀, where η₀ is evaluated at z = 0, and for a flat matter-dominated universe η ∝ (1 + z)^{-1/2}, where z_* = 1100. This gives an angular extent of \theta_1 \approx \eta_*/(\sqrt{3}\,\eta_0) \approx 1^\circ, or a multipole of ℓ ≈ 160.
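The ≈1° estimate can be reproduced in a few lines, assuming matter domination so that η_*/η₀ = (1 + z_*)^{-1/2}:

```python
import math

def first_peak_angle(z_star=1100.0):
    """Rough angular scale of the first acoustic peak, in degrees:
    theta_1 ~ eta_* / (sqrt(3) eta_0), with eta ∝ (1 + z)**(-1/2)."""
    theta_rad = (1.0 + z_star) ** -0.5 / math.sqrt(3.0)
    return math.degrees(theta_rad)
```

This returns ≈1.0°, i.e. a first peak at a multipole of order a couple hundred, consistent with the estimate in the text.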
We can add small scalar perturbations to the metric from curvature and dark matter potentials; expressed in the conformal Newtonian gauge (Dodelson, 2003) this is given by

g'_{\mu\nu} = g_{\mu\nu} + \delta g_{\mu\nu},   (1.26)

where g_{\mu\nu} is the flat FRW metric and the perturbation \delta g_{\mu\nu} is given by

\delta g_{\mu\nu} = \begin{pmatrix} -2\Psi & 0 & 0 & 0 \\ 0 & 2a^2\Phi & 0 & 0 \\ 0 & 0 & 2a^2\Phi & 0 \\ 0 & 0 & 0 & 2a^2\Phi \end{pmatrix},   (1.27)
where Ψ is the Newtonian potential and Φ is a perturbation to the spatial curvature. Both of these terms will have an effect on the photon-baryon fluid, first through the addition of a kΨ term to the right-hand side of the Euler equation 1.23, creating competition between photon pressure and gravitational potential. Second, the curvature perturbations act as a cosmological redshift, coupling to the temperature and adding a −Φ̇ term to the continuity equation.

This changes the oscillator equation to its new form,

\ddot{\Theta} + c_s^2 k^2 \Theta = -\frac{k^2}{3}\Psi - \ddot{\Phi},   (1.28)
and in the absence of pressure (so that the metric perturbations are constant), plus assuming no baryons, the solution in the matter-dominated epoch is

[\Theta + \Psi](\eta) = [\Theta + \Psi](\eta_{md}) \cos ks,   (1.29)

where η_md is the start of the matter-dominated epoch.

The result is a similar oscillator with the temperature fluctuation being replaced by an effective temperature fluctuation, Θ → [Θ + Ψ]. This can also be thought of in terms of photons leaving potential wells after recombination. Those in hot dense regions deep within potential wells will incur a large gravitational redshift, decreasing their energy and therefore effective temperature. This result is similar to that obtained by Sachs & Wolfe (1967), where over-dense regions actually appear as cold spots.
The next component to include is the baryons; their corresponding effect is referred to as baryon loading. In the simplest of terms, the baryons both change the sound speed, hence the sound horizon, and add an additional contribution to the gravitational potential, thereby enhancing the compressional modes in the acoustic oscillations and damping the rarefactions. Yet this is mitigated by another effect referred to as radiation driving. Qualitatively, the more radiation present in the universe, the greater the expansion rate in the radiation-dominated epoch — in addition to a larger overall pressure — which both serve to smooth gravitational potentials early on. This increases the amplitude of oscillations, since the gas is not drawn into potentials as much, which works counter to baryon loading.
A final effect is Silk damping (Silk, 1968), related to the particle coupling. If the fluid is optically thick at some scale, τ̇/k ≫ 1, where \dot{\tau} \equiv a n_e \sigma_T is the differential Thomson optical depth, then the photons and baryons are tightly coupled. If on the other hand the fluid is optically thin, the particles can "slip" past each other, dissipating the oscillations and smoothing the perturbations. The precise role of Silk damping can be determined by examining the Euler and continuity equations for both the photons and baryons.
The total number of each species is conserved separately, so the continuity equations are given by

\dot{\Theta} = -\frac{k}{3} v_\gamma - \dot{\Phi}   (1.30)

\dot{\delta}_b = -k v_b - 3\dot{\Phi},   (1.31)

where δ_b and v_b are the baryon density perturbation and velocity. The Euler equations become

\dot{v}_\gamma = k(\Theta + \Psi) - \frac{k}{6}\pi_\gamma - \dot{\tau}(v_\gamma - v_b)   (1.32)

\dot{v}_b = -\frac{\dot{a}}{a} v_b + k\Psi + \dot{\tau}(v_\gamma - v_b)/R,   (1.33)

where R is the ratio of baryon to photon density and the two new important terms in these equations are the differential optical depth and the radiation viscosity π_γ. The radiation viscosity is proportional to the quadrupole moment of the temperature distribution, which is generated by gradients in the photon velocity, and is suppressed by scatterings that eliminate the quadrupole, i.e. \pi_\gamma \propto (k v_\gamma/\dot{\tau}).
The modified equations result in the complete oscillator equation

c_s^2 \frac{d}{d\eta}\left(c_s^{-2}\dot{\Theta}\right) + \frac{k^2 c_s^2}{\dot{\tau}}\left[A_v + A_h\right]\dot{\Theta} + k^2 c_s^2 \Theta = -\frac{k^2}{3}\Psi - c_s^2 \frac{d}{d\eta}\left(c_s^{-2}\dot{\Phi}\right).   (1.34)

The solution to this equation is an oscillator damped by a term e^{-k^2\eta/\dot{\tau}}, where the damping scale k_d \propto \sqrt{\dot{\tau}/\eta} is the geometric mean of the horizon and the mean free path.
Figure 1.11: An annotated power spectrum relating features to the physical processes that generate them: the initial conditions at large scales; the first peak, set by the sound horizon; the second and third peaks, modulated by baryon loading and radiation driving; and the damping tail produced by photon diffusion. The WMAP 5 yr points are overlaid. This figure is inspired by those of Wayne Hu. The spectra are generated using the CAMB code (Lewis et al., 2000).
1.4.3 Decoding the Power Spectrum
As was shown for the simple case of a perfect photon fluid and scalar perturbations to
the metric in equations 1.24 and 1.25, the density perturbations evolve through acoustic
oscillations that result in the harmonic series of Fourier space peaks in the temperature
anisotropy. This is clearly seen in Figure 1.11, which shows the WMAP 5 yr power spectrum
from Figure 1.10, now annotated to highlight the important physical processes at each scale.
The curvature of space-time, baryon loading, radiation driving and Silk damping, all have
strong consequences for the position and relative amplitudes of the peaks in the harmonic
series.
The first peak corresponds to the largest angular scale harmonic mode in the spectrum, which was able to complete only a single compression or rarefaction before recombination; the position of this peak therefore corresponds to the sound horizon near recombination. The physical scale of the first peak is determined by basic physics. Its angular scale, i.e. position in multipole, will be sensitive to the underlying cosmology, in particular the curvature of the universe Ω_k. In a closed (open) universe a standard ruler at some distance will appear larger (smaller) than one would expect for flat space, giving the impression of being closer (farther) via the angular diameter distance. Therefore the first peak shifts to lower (higher) multipoles for a closed (open) universe as compared with flat space. The peak location also has a weak dependence on the value of the dark energy density Ω_Λ; dark energy will also change the effective distance a photon travels via accelerated expansion.
The subsequent maxima in the harmonic series correspond to scales that were able to complete additional compressions and rarefactions. Baryon loading acts to enhance the compressional modes of the acoustic oscillations, in this case the odd peaks of the power spectrum; the relative amplitude of the second peak with respect to the first (and third) is an indicator of the baryon density Ω_b. The baryon density might be inferred from comparing the second peak to the first alone, except that radiation driving can counter the effects of baryon loading — a low ratio of matter to radiation density will result in the decay of potentials at early radiation-dominated times — enhancing all harmonics. The fact that the third peak is not in fact suppressed below the second is a strong indicator that the total matter density Ω_m is relatively high.
Finally, the damping tail of the CMB power spectrum also depends on these parameters in a complicated way. At recombination, the mean free path grows rapidly and photons are able to travel between hot and cold regions, erasing the perturbations. The optical depth is regulated by Thomson scattering and is thus strongly dependent on the baryon density; higher values of Ω_b result in less damping during recombination, shifting the tail to higher multipoles. Yet diffusion operates at all times and thus also depends on the age of the universe at recombination; the longer the fluid has been oscillating, the more damped it will be. Thus higher values of Ω_m slow the expansion and actually drive damping to lower multipoles. By a multipole of ℓ ∼ 3000 the CMB power spectrum has been suppressed by a factor of ∼1000 and is effectively zero above ℓ ∼ 4000.
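As an illustrative toy model only (not the full Boltzmann calculation), a Gaussian envelope exp[−(ℓ/ℓ_d)²] applied to C_ℓ reproduces the quoted suppression if the damping multipole ℓ_d is chosen accordingly:

```python
import math

def damping_envelope(ell, ell_d):
    """Toy Silk-damping suppression factor exp[-(l/l_d)^2] applied to C_l."""
    return math.exp(-((ell / ell_d) ** 2))

def ell_d_for_suppression(ell, factor):
    """Damping scale l_d that gives the requested suppression factor at multipole l."""
    return ell / math.sqrt(math.log(factor))

ELL_D = ell_d_for_suppression(3000.0, 1000.0)   # ~1140 for the numbers in the text
```

With this ℓ_d the envelope at ℓ = 4000 is already below 10⁻⁵, illustrating why the primary spectrum is effectively zero there.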
1.4.4 Initial Conditions from Inflation
It has been shown that the observed features of the CMB temperature and polarization
may be well described by an analysis of acoustic oscillations in the photon-baryon gas. Yet,
the acoustic oscillations require an initial instability to start the oscillation. The theory of
inflation provides these initial perturbations, while solving other problems, of which perhaps
the most important is known as the horizon problem. The CMB is unphysically uniform:
the universe appears to be in thermal equilibrium over scales which are greater than the
horizon scale at the time of recombination. How did causally disconnected regions of the
universe know to be at the same temperature as each other?
Inflation solves it by positing that at some early stage in the universe’s evolution, an
exponential expansion of space-time occurred at rates so high that regions in causal contact
were stretched apart outside their horizon (by a factor ∼ e60 ). During the exponential
expansion quantum fluctuations, such as particle pair production, were frozen into existence
when they expanded beyond causal contact. In this way, inflation simultaneously produces
the initial perturbations. In most CMB and cosmological analyses, the initial power spectrum of density perturbations is assumed to be a power law with an index, or tilt, of n ≈ 1, corresponding to a scale-invariant spectrum. Inflation also produces an initial spectrum of
gravitational waves that is assumed to be scale invariant, with an amplitude parameterized
by the energy scale of inflation Ei .
1.4.5 Polarization in the CMB
If the acoustic oscillation paradigm is correct then a net polarization should be detectable: a systematic check on the standard model. The same Thomson scattering process that thermalizes the photon-baryon fluid pre-recombination will also generically result in a net polarization of the intensity field at the surface of last scattering, because the cross-section is polarization-dependent. The Thomson scattering cross section is given by Chandrasekhar (1960) as

\frac{d\sigma}{d\Omega} = \frac{3\sigma_T}{8\pi} |\hat{\epsilon}_{in} \cdot \hat{\epsilon}_{out}|^2,   (1.35)

where σ_T is the total Thomson cross section and (\hat{\epsilon}_{in}, \hat{\epsilon}_{out}) are unit vectors in the plane perpendicular to the direction of propagation that define the incoming and outgoing polarization directions (see Figure 1.12).
In the case of an isotropic field, any net linear polarization generated from radiation
incident from one direction will be erased by radiation of equal amplitude from an orthogonal
direction. Thus only an anisotropic field, in this case the local quadrupole, is capable of
inducing any net polarization, again represented in Figure 1.12. If the scattering is very
rapid in an optically thick environment then the direction of polarization will be randomized,
so only the polarization at the surface of last scattering will be preserved. The quadrupole
amplitude is proportional to the dipole with qγ ∝ (k/τ̇ )vγ , smaller than the temperature
by a factor k/τ̇ , calculated numerically to be of order 10%. The quadrupole moment is
effectively the spatial gradient in the dipole moment, reflecting the transition from "hot" to "cold" regions and therefore, as can be seen from equation 1.22, will be out of phase with the temperature anisotropy.
The polarization pattern is described by the Stokes parameters Q and U (Rybicki &
Lightman, 1979) whose distribution on the sky can also be decomposed into spherical harmonics.

Figure 1.12: Thomson scattering of radiation with a quadrupole anisotropy generates linear polarization. Here, blue (thick) lines represent "hot" and red (thin) lines represent "cold" incoming photons (Hu & White (1997)).

The electric field of an elliptically polarized electromagnetic wave is given by
E_x = E_0(\cos\beta\cos\chi\cos\omega t + \sin\beta\sin\chi\sin\omega t)
E_y = E_0(\cos\beta\sin\chi\cos\omega t - \sin\beta\cos\chi\sin\omega t),   (1.36)

where β defines the ellipticity of the polarization state, χ is some arbitrary rotation of the ellipse from the reference frame axes, ω is the frequency and t is time.

The general solution to the wave equation is a superposition of two orthogonal waves,

\vec{E} = (\hat{x}\mathcal{E}_1 + \hat{y}\mathcal{E}_2)\, e^{-i\omega t} \equiv \vec{E}_0\, e^{-i\omega t},   (1.37)
with an arbitrary phase, such that the complex amplitudes are expressed as

\mathcal{E}_1 = E_1 e^{i\phi_1}, \qquad \mathcal{E}_2 = E_2 e^{i\phi_2}.

Taking the real components of \vec{E} along \hat{x} and \hat{y} gives

E_x = E_1\cos(\omega t - \phi_1), \qquad E_y = E_2\cos(\omega t - \phi_2).   (1.38)

Equating the two expressions for the electric field vector determines the Stokes parameters:

I \equiv E_1^2 + E_2^2 = E_0^2
Q \equiv E_1^2 - E_2^2 = E_0^2 \cos 2\beta \cos 2\chi
U \equiv 2E_1 E_2 \cos\delta = E_0^2 \cos 2\beta \sin 2\chi,   (1.39)

where δ = (φ₁ − φ₂). From these expressions we have

I^2 = Q^2 + U^2, \qquad \tan 2\chi = \frac{U}{Q},   (1.40)

where Q and U measure the orientation angle χ of the ellipse with respect to the reference frame axes and I is the total intensity of the wave; the definition of U is equivalent to Q, but rotated through a 45° angle.21
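Equations 1.39 and 1.40 can be verified numerically for the purely linear case (β = 0 and δ = 0, so the circular component vanishes and I² = Q² + U² holds exactly); the helper names below are illustrative:

```python
import math

def stokes_linear(E0, chi):
    """Stokes I, Q, U for a linearly polarized wave (beta = 0) of amplitude
    E0 with polarization angle chi, from equation 1.39."""
    I = E0 ** 2
    Q = E0 ** 2 * math.cos(2.0 * chi)
    U = E0 ** 2 * math.sin(2.0 * chi)
    return I, Q, U

def polarization_angle(Q, U):
    """Recover chi from tan(2 chi) = U / Q, equation 1.40."""
    return 0.5 * math.atan2(U, Q)
```

Recovering χ from Q and U, and checking I² = Q² + U², confirms that the pair (Q, U) encodes the orientation of the polarization ellipse.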
Unlike T, the definitions of Q and U depend on the reference frame orientation χ and transform like

Q + iU \rightarrow e^{-2i\chi}(Q + iU).   (1.41)

21. I am ignoring circular polarization states V, which are not predicted for the CMB.
Given this representation, any polarization field may be decomposed into two modes, the E and the B modes (Kamionkowski et al., 1997; Kosowsky, 1999). In this case, the decomposition into spherical harmonics looks like

Q(\hat{n}) \pm iU(\hat{n}) = -\sum_{\ell,m} (E_{\ell m} \pm iB_{\ell m})\, {}_{\pm 2}Y_{\ell m}(\hat{n}),   (1.42)

where {}_{\pm 2}Y_{\ell m}(\hat{n}) are spin-2 spherical harmonics and (E_{\ell m}, B_{\ell m}) are the amplitudes of the E and B modes respectively, so-called for their respective patterns on the sky. The E-mode is a curl-free vector pattern akin to the electric field (but unrelated to E in the Stokes parameter derivation above) and the B-mode is divergence-less, akin to the magnetic field. Just as with the temperature field, a set of power spectra are defined as

\langle E^*_{\ell m} E_{\ell' m'} \rangle = \delta_{\ell\ell'}\delta_{mm'} C_\ell^{EE},
\langle B^*_{\ell m} B_{\ell' m'} \rangle = \delta_{\ell\ell'}\delta_{mm'} C_\ell^{BB},
\langle \Theta^*_{\ell m} E_{\ell' m'} \rangle = \delta_{\ell\ell'}\delta_{mm'} C_\ell^{TE},   (1.43)

which are referred to as the E- and B-mode auto spectra and the TE cross spectrum; these spectra are shown in Figure 1.13.
The E-mode power spectrum is generated by the quadrupole anisotropy in the photon-baryon gas density as discussed above, but it is not possible to generate B-modes by scattering. The B-mode polarization is the result of tensor perturbations, which are not a product of the scalar acoustic oscillations. The B-modes are generated either by inflationary gravitational waves generating a temperature quadrupole by redshifting photons, or by gravitational lensing along the line of sight between the CMB and observer. In both cases the properties of the B-mode spectrum are a reflection of the E-modes. The amplitude of the lensing B-modes is well determined from the growth of density perturbations, but the gravitational wave component is not, depending entirely on the unknown energy scale of inflation E_i, and parameterized through the tensor-to-scalar ratio r. The detection of B-modes is perhaps the final frontier of CMB physics, particularly the inflationary component.

Figure 1.13: The WMAP 5 yr best-fit temperature TT power spectrum compared with the polarization TE, EE and BB power spectra; dashed lines indicate negative values that would not otherwise be displayed on a logarithmic plot. The polarization is expected to be at 10% of the intensity, or 1% in power. BB is not generated by scalar perturbations in the density and is the result of gravitational effects; here the tensor component from inflationary gravity waves and the scalar component from foreground lensing by large scale structure are shown separately and combined. A tensor-to-scalar ratio of r = 0.73 is assumed for the inflationary component, corresponding to the current BICEP upper limit from Chiang et al. (2009). The spectra are generated using the CAMB code (Lewis et al., 2000).
1.5 Large Scale Structure: Something in the Way of Things
After recombination the density perturbations continue to evolve, though now the dynamics are driven primarily by gravity and the dominant dark matter particles that are unstable to collapse. This evolution is largely determined by the energy density parameters Ω_x, particularly by Ω_M at earlier times and more recently by the existence of a cosmological constant-like dark energy Ω_Λ component. As initially only slightly overdense regions influence the matter distribution around them, they collapse to quickly become very overdense regions, and the process develops nonlinearly. This means that at late times the independent Fourier modes of the matter distribution become correlated and no longer evolve simply like the acoustic oscillations described in Section 1.4. Analytic solutions to the formation of large scale structure are only linear approximations, and large computer simulations are required to properly track the development. A very detailed review of the distribution of structure can be found in Zentner (2007); here I summarize only the foundational concepts and definitions.
The initial distribution of density perturbations ρ(\vec{x}) is Gaussian and therefore fully described by its three-dimensional power spectrum P(k), which is just the Fourier transform of the two-point correlation function ξ(\vec{x}_1, \vec{x}_2). Given a density contrast defined by

\delta(\vec{x}) \equiv \frac{\rho(\vec{x}) - \bar{\rho}}{\bar{\rho}},   (1.44)

where \bar{\rho} is the mean matter density, such that \bar{\rho} = \rho_{crit}\Omega_M, the two-point correlation function is given by

\xi(\vec{x}_1, \vec{x}_2) = \langle \delta(\vec{x}_1)\,\delta(\vec{x}_2) \rangle,   (1.45)

and by isotropy \xi(\vec{x}_1, \vec{x}_2) = \xi(|\vec{x}_1 - \vec{x}_2|). The power spectrum is the Fourier transform of equation 1.45, i.e. P(k) \propto \langle |\delta(k)|^2 \rangle, where

\delta(\vec{k}) = \int d^3x\, \delta(\vec{x})\, e^{i\vec{k}\cdot\vec{x}}.   (1.46)
For this Gaussian density field, the probability of finding a perturbation with contrast δ in a given radius R is simply

\mathcal{P}(\delta, R)\, d\delta = \frac{1}{\sqrt{2\pi\sigma^2(R)}}\, e^{-\frac{\delta^2}{2\sigma^2(R)}}\, d\delta,   (1.47)

where the parameter σ(R) is the rms fluctuation of the density field and is referred to as the amplitude of initial density perturbations for some scale R. It is given by

\sigma^2(R) = \int d\ln k\, \Delta^2(k)\, |W(k, R)|^2,   (1.48)

where \Delta^2(k) \equiv k^3 P(k)/2\pi^2 is just the dimensionless power spectrum and W is a smoothing window. For the somewhat arbitrary case that σ(R) is evaluated for a spherical window with radius R = 8h^{-1} Mpc, it is dubbed σ_8; this parameter is typically used to normalize the power spectrum and is of order unity.
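Equation 1.48 is straightforward to evaluate for a toy spectrum. The sketch below uses the standard spherical top-hat window, W(x) = 3(sin x − x cos x)/x³, together with an invented, purely illustrative dimensionless spectrum Δ²(k) = (k/k₀)⁴/[1 + (k/k₀)²]³ (not a fit to any data):

```python
import math

def tophat_window(x):
    """Fourier transform of a spherical top-hat, W(x) = 3 (sin x - x cos x) / x^3."""
    if x < 1e-4:                       # Taylor limit avoids 0/0 at small x
        return 1.0 - x * x / 10.0
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3

def delta2_toy(k, k0=0.2):
    """Illustrative dimensionless power spectrum (made up for this example)."""
    u = (k / k0) ** 2
    return u ** 2 / (1.0 + u) ** 3

def sigma(R, steps=4000, lnk_min=-8.0, lnk_max=8.0):
    """sigma(R) from equation 1.48, integrating d ln k by the midpoint rule."""
    dlnk = (lnk_max - lnk_min) / steps
    total = 0.0
    for i in range(steps):
        k = math.exp(lnk_min + (i + 0.5) * dlnk)
        total += delta2_toy(k) * tophat_window(k * R) ** 2 * dlnk
    return math.sqrt(total)
```

As expected, σ(R) decreases with R: larger smoothing windows average over more structure. For this toy spectrum the unsmoothed limit σ(R → 0) = 1/2 can be checked analytically.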
In the Press & Schechter (1974) structure formation formalism, when the density within some scale exceeds a critical value δ_c it will become unstable to gravitational collapse and form a halo. The number distribution of these haloes can be directly related to the probability of finding a perturbation of a given scale with a density contrast greater than critical, such that

\frac{dn}{dM} = \frac{\rho}{M} \int_{\delta_c}^{\infty} \frac{d\mathcal{P}(\delta, R)}{dM}\, d\delta   (1.49)

\propto \frac{\delta_c}{\sigma(R)} \frac{d\sigma(R)}{dM}\, \mathcal{P}(\delta_c, R).   (1.50)

The function σ(R) can be related to the constant σ_8 by an arbitrary function of radius R that can be calibrated from simulations; the overall amplitude must be measured via observation. Thus the distribution of matter is intimately tied to the cosmological parameter σ_8.
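The integral of the Gaussian of equation 1.47 above the collapse threshold has a closed form; a short sketch (δ_c ≈ 1.686, the standard spherical-collapse threshold, is an assumed value not stated in the text):

```python
import math

DELTA_C = 1.686   # standard spherical-collapse threshold (assumed value)

def collapse_fraction(sigma_R, delta_c=DELTA_C):
    """Probability that a region with rms fluctuation sigma(R) exceeds delta_c:
    the integral of equation 1.47 from delta_c to infinity,
    which equals (1/2) erfc(delta_c / (sqrt(2) sigma))."""
    return 0.5 * math.erfc(delta_c / (math.sqrt(2.0) * sigma_R))
```

For σ(R) = 0.8, roughly the value of σ₈ quoted below, the collapsed fraction is just under 2%, and it rises steeply with σ, illustrating how sensitively halo abundance depends on this parameter.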
The amplitude of initial density perturbations σ_8 has been measured via a variety of galaxy cluster surveys which count the number distribution of clusters as a function of redshift. These are conducted mostly in the optical and X-ray bands where clusters are brightest; the mass of the object is inferred using mass proxies such as richness or X-ray flux, with the most recent and notable example being Vikhlinin et al. (2009). Though there was some discrepancy amongst various methods in the past, the current consensus value is σ_8 ≈ 0.8. Of course, fluctuations in the initial perturbations as directly imprinted in the CMB can also be used to estimate σ_8, as has been done by the WMAP experiment (Dunkley et al., 2009).
Yet, the CMB as observed today also carries with it secondary anisotropies that are
imprinted by intervening large-scale structure, enhancing power at a range of scales. There
are a variety of such effects, for example, in roughly descending-angular-scale order:
The Integrated Sachs-Wolfe Effect (` < 100) results in the blueshifting of CMB photons as they pass through decaying potentials, particularly if dark energy is important.
Gravitational Lensing (` ∼ 1000) smoothes the CMB perturbations as photons are deflected from their path by large-scale structure.
The Thermal and Kinetic Sunyaev-Zel’dovich Effects (` ∼ 3000) result in a distortion of the CMB blackbody spectrum as photons scatter off of hot electrons in the
ionized intra-cluster medium and are potentially further doppler shifted by clusters
with large peculiar velocities.
(see Hu & Dodelson, 2002, for an exhaustive list). Coincidently, the latter effect becomes
important at the scales where the primary CMB power is exponentially suppressed in the
damping tail. As will be discussed in Chapter 6 the thermal Sunyaev-Zel’dovich effect is an
important secondary anisotropy for the subject of this thesis.
1.6 Thesis Outline
Having broadly set the modern cosmological stage and discussed the role of the CMB upon
it, the remainder of this thesis focuses attention on a measurement of the CMB power
spectrum made with the QUaD telescope and the interpretation of this measurement in
light of the previous discussions. Numerous theses and papers have already been published by
the QUaD collaboration and this thesis draws broadly from this body of work, while adding
what I humbly believe is substantial new material.
The QUaD experiment was first proposed in Church et al. (2003), forecasted results were
presented in Bowden et al. (2004) and detailed descriptions of the commissioned instrument
optics and hardware can be found in O’Sullivan et al. (2008) and Hinderks et al. (2009)
respectively. Additional information about the design, forecasting, assembly, calibration and
commissioning of the telescope is available in the dissertations of Bowden (2004), Hinderks
(2005) and Zemcov (2006). The full set of temperature and polarization power spectra
derived from three years of continual observations covering a multipole range of 200 < ℓ <
2000 are presented in Ade et al. (2008), Pryke et al. (2009) and Brown et al. (2009). The
resulting spectra were then used in standard cosmological parameter analyses in Castro et al.
(2009) and to draw conclusions about fundamental physics in Wu et al. (2009). Adding to
this already impressive body of work, Friedman et al. (2009) extended the range of the power
spectra to multipoles of ℓ = 3000, probing the CMB damping tail for the potential signature
of secondary anisotropies.
Rather than refer the reader to this broad collection of papers, I have made an attempt
to encapsulate all of the relevant material into a single cohesive and self-contained narrative,
hopefully with enough detail to follow the analysis through to the conclusions of this work
with clarity, while omitting detail that would otherwise be distracting. In truth, this was
also an attempt to review the process myself and reach a deeper understanding of the context
and significance of my own work within this larger body. Though any truly collaborative
work is by definition not a solitary effort, I have attempted to clearly identify my distinct
relevant contributions whenever possible, sometimes to the detriment of flow, by relegating
the subject to its own chapters and sections or by switching from a passive voice to the first
person singular.
The progression of this thesis is then as follows. Chapter 1 was a broad introduction to
the history of modern cosmology, the current standard model description of the evolution of
the universe and role of the CMB within that model, as well as some abridged discussion of
the methods employed in analyzing the CMB. Chapter 2 describes the QUaD instrument,
paying greatest attention to the optical components and the foam cone in particular, which
was both my first undertaking in the QUaD collaboration and first experience working with
my advisor Clem Pryke — a hazing of sorts. The analysis of the telescope beams, directly
related to the optical components introduced in Chapter 2, is partitioned off to Chapter 4;
measuring the beams proved to be a daunting task that occupied much of my time working
with QUaD.
The observations of the CMB temperature anisotropy, resulting maps and simulations
intended to replicate this process are discussed in Chapter 3, where I pay special attention to
steps taken to characterize and mitigate the radio source contribution in our measurements.
The methods employed to calculate the power spectra are described in Chapter 5, accompanied by the results for ℓ > 2000. Finally, Chapter 6 uses these spectra to place constraints
on the power from secondary anisotropies due to the thermal Sunyaev-Zel’dovich effect and
considers these results in the context of other existing recent measurements.
Though QUaD was designed to make exquisite measurements of the CMB polarization
power spectra, I do not discuss them here as they are not relevant to the final conclusions of this
thesis. I refer the reader to the aforementioned QUaD references for more information
regarding polarization.
CHAPTER 2
THE QUAD TELESCOPE
Figure 2.1: The QUaD telescope inside its ground shield atop the Martin A. Pomerantz
Observatory (MAPO) at the South Pole.
The QUaD¹ telescope (Figure 2.1) is a millimeter-wavelength bolometric polarimeter
located at the South Pole, which operated for three austral winters from 2005–2007. QUaD
was designed to study the polarization of the CMB anisotropy in the multipole range of
200 < ℓ < 2000.² Figure 2.2 shows a schematic cross-sectional drawing of QUaD with the
major structural components labeled.
1. QUaD stands for QUEST (QU Extragalactic Survey Telescope) at DASI (Degree Angular Scale Interferometer, Kovac et al., 2002).
2. In practice QUaD achieved sufficient sensitivity to measure the temperature power spectrum up to multipoles of ℓ ∼ 3000.
Figure 2.2: A schematic of the QUaD telescope structure highlighting the major mechanical
components as well as the azimuth, elevation and deck pointing axes. The primary mirror
is not labeled; it is located at the base of the foam cone, circumscribed by the guard ring.
Also note that the receiver is housed in a cryostat that is not labeled. Figure from Hinderks
et al. (2009).
The axisymmetric Cassegrain telescope featured a 2.6 meter diameter primary mirror and
a 0.45 meter secondary mirror. Observing at bandcenters of 100 and 150 GHz, it achieved
resolutions of ∼ 5′ and ∼ 3.5′ respectively. A noteworthy feature of the telescope was the
millimeter-wavelength-transparent foam cone that supported the secondary mirror atop the
primary; it is visible atop the telescope in Figure 2.1. The cone also served to shield the
mirrors from the ambient conditions, maintaining a stable room temperature environment.
Additional lenses located inside a cryostat — home to the detectors at the base of the primary
— refocused incident radiation onto a curved receiver focal plane, which resulted in a 1.5◦
wide field-of-view.
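As a rough cross-check of these numbers (my own back-of-the-envelope estimate, not a figure from the QUaD design papers), the Rayleigh diffraction criterion θ ≈ 1.22 λ/D for a 2.6 m aperture gives beams of about the quoted size; the true beams also depend on the illumination taper of the primary:

```python
import math

C = 299792458.0  # speed of light [m/s]

def resolution_arcmin(freq_ghz, diameter_m=2.6):
    """Rayleigh-criterion angular resolution, 1.22*lambda/D, in arcminutes."""
    wavelength = C / (freq_ghz * 1e9)           # [m]
    theta_rad = 1.22 * wavelength / diameter_m  # [rad]
    return math.degrees(theta_rad) * 60.0       # [arcmin]

print(round(resolution_arcmin(100), 1))  # ~4.8 arcmin at 100 GHz
print(round(resolution_arcmin(150), 1))  # ~3.2 arcmin at 150 GHz
```

The 150 GHz estimate comes out slightly below the measured ∼ 3.5′, as expected since the taper broadens the main lobe relative to a uniformly illuminated aperture.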
The receiver employed polarization sensitive bolometer (PSB) detectors — sensitive to
a single component of linearly polarized light — coupled to corrugated feed horns which,
together with a stack of filters, defined the bandpass. An orthogonally oriented pair of PSBs was
mounted to the back of a single feed horn in an array of 31 such pixels, 12 at 100 GHz and
19 at 150 GHz; the pair difference isolates the polarized component and the sum provides
the total intensity.
In the following sections I briefly detail the major structural, optical and detector components of the telescope. In Section 2.1 I review the observation site, the pointing capabilities
of the mount and the major optical components. I defer an extended discussion of the
foam cone to Section 2.2. In Section 2.3 I detail the design and function of the bolometers,
cryostat, fridge and receiver.
2.1 The Optical Assembly
In this section I broadly describe the path that CMB radiation follows on its way to the
QUaD detectors; first through the atmosphere and then through the telescope’s warm and
cold optical components that finally direct incoming rays into the receiver feed horns.
Figure 2.3: A panoramic view of the South Pole site as seen from above the Dark Sector
Lab (DSL), which now houses BICEP and SPT. The QUaD ground shield can be seen at
the center, with South Pole Station behind it to the right. The clear sky is a hallmark of
the South Pole site (the crooked horizon is an image artifact).
2.1.1 The Observing Site and Mount
The South Pole is recognized as perhaps the best site for millimeter-wavelength astronomy on
Earth (see Lane, 1998; Lay & Halverson, 2000). It has hosted a number of CMB experiments,
some of the most recent and notable examples are ACBAR (Kuo et al., 2004), BICEP
(Takahashi et al., 2009) and SPT (Padin et al., 2008). See Kovac & Barkats (2007) for a
more detailed history. Figure 2.3 shows a panoramic view of the QUaD observing site at the
South Pole.
There are three key factors that make the location so exceptional: it’s high, it’s dry and
it’s cold. The South Pole sits on the Antarctic Plateau (though nowhere near the highest
point) at an elevation of 2800 m, though it has an effective atmospheric pressure equivalent
to 3200 m near the equator (due to the Earth’s rotation). Moreover, the location is one of
the world’s driest deserts with less than 0.5 mm of precipitable water vapor more than half
of the time. This is due in part to the extremely low temperatures, which reach as low as −82◦ C
in winter.
The first two features result in an extremely low atmospheric column density, whereas the
last reduces the atmospheric loading on the detectors (see Section 2.3). At 100 and 150 GHz
the dominant sources of atmospheric emission are oxygen and water vapor respectively; the
latter of which is poorly mixed in the atmosphere (i.e. clouds). The six-month diurnal cycle
minimizes day-to-day temperature fluctuations, and the persistent katabatic winds further
stabilize the atmosphere and blow atmospheric inhomogeneities past the telescope. Of
course, in principle the diurnal cycle also allows for at least six months of
uninterrupted, continuous observation.
The telescope sits atop an isolated tower at the end of the Martin A. Pomerantz Observatory (MAPO), inside a ground shield mounted independently of the QUaD tower; this
minimizes vibrations induced by building activity and wind. The QUaD mount was recycled from
the DASI experiment. The mount allows the telescope to be fully rotated in azimuth and elevation; additionally, a third deck-rotation axis allows rotation about the pointing axis.
For an interferometer like DASI this permits sampling more Fourier modes; for a polarization
imaging experiment like QUaD, it adds redundancy in sampling the polarization
Stokes parameters.
2.1.2 The Optical Path
QUaD utilizes an axisymmetric Cassegrain design illustrated in Figure 2.4 with two stages, a
warm (ambient) and cold (cryogenic) stage. Incoming radiation is first reflected by the 2.6 m
diameter parabolic primary mirror and then off the 0.45 m diameter hyperbolic secondary,
coming to a focus just above the surface of the primary — the warm stage. The secondary
mirror is supported by a millimeter-wavelength transparent, ∼ 2” thick, foam cone — manufactured from nitrogen-expanded Zotefoam PPA30 — described in detail in Section 2.2.
The reflectivity of the cone is only ∼ 2%, but this is unfortunately enough to generate far sidelobes that lead to contamination, which results in the loss of some data.
The first focal point is inside the receiver cryostat, which protrudes through a hole in the
Figure 2.4: An illustration of the QUaD optical path. The axisymmetric Cassegrain design
(a-left) first focuses the light to a point just above the primary using a warm 2.6 m parabolic
primary and 0.45 m hyperbolic secondary. There it enters the cryostat and is re-imaged onto
a curved focal plane using two 18 cm HDPE lenses. Spillover is mitigated with the cold-stop
(a-left), primary skirt (guard-ring) and ground shield (see the text). Figure from O’Sullivan
et al. (2008).
primary. Once inside the cryostat — the cold stage — a pair of lenses couple the Cassegrain
warm optics to a curved focal plane. The highly-curved, 18 cm lenses are manufactured from
anti-reflection coated, low refractive index, high-density polyethylene (HDPE).
The low refractive index and anti-reflection coating are both used to prevent reflections
within the cryostat and the lenses are cooled to reduce loading on the detectors. Spillover —
otherwise leading to sidelobes — is mitigated with a cold-stop between the final lens and the
focal plane, which tapers the Gaussian beam at -20 dB. Additionally a guard-ring (primary
skirt) reflects any spillover off the primary to the cold sky while the ground shield reflects
all potential lines-of-sight with the ground back to the cold sky. The secondary mirror was
also fabricated with a 48 mm hole in the center that prevents re-illumination of the cryostat
window and the protruding “snout” of the cryostat is surrounded by a reflective baffle which
fills the geometric shadow of the secondary.
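The −20 dB edge taper implies a definite spillover fraction: for a Gaussian beam with intensity profile exp(−2r²/w²), the fraction of total power falling outside a circular stop equals the edge intensity relative to the beam axis, i.e. 10^(−T/10) for a taper of T dB. A minimal sketch of this estimate (my own back-of-the-envelope figure, not a number from the QUaD optics papers):

```python
def spillover_fraction(edge_taper_db):
    """Fraction of a Gaussian beam's power falling outside a circular stop
    whose edge illumination is edge_taper_db below the beam axis.
    For I(r) = exp(-2 r^2 / w^2), the power integral beyond radius a
    equals I(a)/I(0), so spillover = 10**(-T/10)."""
    return 10.0 ** (-edge_taper_db / 10.0)

print(spillover_fraction(20))  # 0.01 -> about 1% spills past a -20 dB stop
```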
2.1.3 The Focus Mechanism
The focus mechanism is located at the back-end of the secondary mirror; its purpose was
to provide an automated and precise way of adjusting the primary/secondary separation
and also to house an internal polarized calibration source; a photograph of the mechanism is
provided in Figure 2.5.
The focus mechanism was constructed from two ∼ 0.4 m diameter quarter-inch thick
aluminum plates (disks), which match the size of the secondary mirror; the bottom plate
attaches to the mirror and the top to the foam cone top-cap (see Section 2.2). The disks
were light-weighted by milling away all of the bottom surface not directly used for bolting;
the central portions of the disks were cut out to house equipment and expose
the secondary mirror hole to the sky.
The disks were coupled at three equidistant locations along their circumference via
stepper-motor based linear actuators3 and reinforced with low-friction lubricated guide-shaft
bearings. Inductance-based limit switches restrict the plate separation to between 1.9 mm and 20.8 mm, approximately equivalent to 13,500 steps for a nominal motor
step distance of 1.5 µm. The motors are actuated by circuits housed in a control box which
also contains circuitry to operate a calibration source — flipping a small circular mirror in
and out of the secondary mirror opening and rotating a polarizing grid which obscures a
thermal blackbody source.
The focus mechanism is controlled remotely via an IR-link and is powered with a rechargeable battery which is replaced by hand and accessed through an opening in the foam cone
top-cap. Though the focus could in principle be adjusted in real time to compensate for
3. http://www.hsi-inc.com
Figure 2.5: The focus mechanism sits atop the secondary mirror, the two plates are slightly
ajar in this photograph. The IR-link is the angled silver square visible in the front right.
The blackbody calibration source is housed in the gray box in the center right, directly to
the left of which is the polarizing grid that is rotated by a motor and plastic gears (also
visible). The secondary hole is visible in the center of the frame with the flip-mirror directly
to its left. The white control box dominates the frame — the beautiful etching work courtesy
of J. Hinderks. Behind and to the right of the control box is the rechargeable battery. A
linear-actuator stepper-motor/guide-shaft bearing pair can be seen left of center in the front
and a second pair is visible in the back right; the third pair is hidden behind the control box.
The limit-switches are to the far-left adjacent to the first motor/shaft pair. The two long
bolts extending off frame are used to attach the mechanism to the cone top-cap (a third is
hidden behind the front-most bolt).
Figure 2.6: An illustration of the feed horn and PSB assembly. Incident radiation first passes
through a filter stack that removes high frequency radiation, and then through the feed horn
throat, which filters out low frequency modes; together these define the bandpass. Figure from
Hinderks (2005)
thermal expansion of the cone, it was in practice set twice at the start of the second and
third seasons.
2.1.4 The Pixel Array
The curved focal plane contains 31 corrugated feed horns, 12 optimized for 100 GHz and
19 for 150 GHz; these are illustrated schematically in Figure 2.6. A stack of metal mesh
filters is assembled at the mouth of the feed-horn and acts as a low-pass filter. At the other
end, the feed terminates in a narrow throat that acts as a waveguide-cutoff high-pass filter.
The feed horns entirely define the bandpass with ∆f /f ≈ 30% at both frequencies. Two
PSBs are coupled to the base of the feed, coaxially aligned, oriented perpendicularly with
respect to each other and separated by 100 µm in a brass λ/2-cavity to maximize the field
amplitude.
Each PSB samples a linear combination of the Stokes parameters I, Q and U such that,

v = η [ I + ((1 − ε)/(1 + ε)) (Q cos 2θ + U sin 2θ) ]        (2.1)
where v is the bolometer readout voltage, θ is the bolometer orientation angle and η is a
Figure 2.7: A diagram of the focal plane array pixel positions; the 100 and 150 GHz pixels are
colored blue and orange respectively, dashed lines indicate common-frequency radial groups,
and the polarization orientations are represented by the interior crosses.
calibration factor which depends on the optical efficiency, detector responsivity, bandwidth,
etc. The prefactor to the polarization terms is the “cross-polar leakage”; it is a constant for
the bolometers related to the sensitivity to orthogonal polarization states and will hereafter
be neglected. From equation 2.1 it is clear that differencing two orthogonal bolometers
results in a measurement of a linear combination of Q and U at a given angle, whereas the
sum yields the total intensity — the quantity this thesis is concerned with.
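The sum/difference logic can be sketched in a few lines (an idealized illustration, not QUaD pipeline code: the cross-polar leakage is set to zero and the calibration factor η is absorbed into the units):

```python
import math

def psb_voltage(I, Q, U, theta):
    """Idealized PSB readout following equation 2.1 with zero leakage:
    v(theta) = I + Q*cos(2*theta) + U*sin(2*theta)."""
    return I + Q * math.cos(2 * theta) + U * math.sin(2 * theta)

def pair_sum_and_diff(I, Q, U, theta):
    """Sum and difference of an orthogonal PSB pair at theta, theta + 90 deg."""
    v1 = psb_voltage(I, Q, U, theta)
    v2 = psb_voltage(I, Q, U, theta + math.pi / 2)
    # cos(2t + pi) = -cos(2t) and sin(2t + pi) = -sin(2t), so the polarized
    # terms cancel in the sum while the intensity cancels in the difference.
    return 0.5 * (v1 + v2), 0.5 * (v1 - v2)

total, pol = pair_sum_and_diff(I=10.0, Q=0.3, U=-0.1, theta=0.2)
# total recovers I exactly; pol equals Q*cos(0.4) + U*sin(0.4)
```

The pair sum is the temperature (total intensity) channel used throughout this thesis.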
Thus, each feed/bolometer pair combination (pixel) has a well-defined bandcenter frequency and polarization orientation; Figure 2.7 illustrates the arrangement of the pixels in
the focal plane array. There is a central pixel nominally coaxial with the pointing axis and
four likewise nominally coaxial concentric rings of detectors around it, with each ring consisting of a single frequency group. The detectors are also aligned into seven rows of mixed
frequency separated by 0.02 degrees. Finally, there are two groups of pixel polarization orientations, rotated with respect to each other by 45◦ akin to the Stokes Q and U parameters.
The two polarization orientation groups allow one to solve for the Stokes parameters exactly;
the deck-angle rotation adds some degree of redundancy, allowing one to determine them
per pixel. The pixel array positions are measured through special calibration observations
discussed in Section 3.1 and Section 4.5.
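To make the "solve exactly" statement concrete, here is a toy closed-form inversion for a single sky pixel seen by one pair from each 45°-rotated orientation group (a hypothetical illustration under the idealizations of equation 2.1 with leakage neglected; the actual analysis fits Q and U per map pixel from many samples and deck angles):

```python
import math

def solve_qu(d0, d45, theta):
    """Recover Q and U from two pair differences taken at orientations
    theta and theta + 45 deg, where each difference measures
    d(theta) = Q*cos(2*theta) + U*sin(2*theta)."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    # At theta + 45 deg, 2*(theta + pi/4) = 2*theta + pi/2, so
    #   d45 = -Q*sin(2*theta) + U*cos(2*theta),
    # and the 2x2 system inverts in closed form (the matrix is a rotation):
    Q = d0 * c - d45 * s
    U = d0 * s + d45 * c
    return Q, U
```

Because the system matrix is a pure rotation, the inversion is exact and well-conditioned at every orientation angle.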
2.2 The Foam Cone
Perhaps the most visually unique component of the QUaD telescope is the foam cone, which
supports the secondary mirror assembly. A foam cone was used in place of more traditional
metal support legs (struts) in order to minimize polarized, asymmetric scattering of incoming
radiation off the surfaces of such metal struts. This choice was motivated by a similar
design used in the COMPASS experiment (Farese et al., 2004). Additionally, the foam cone
provides the added advantage of insulating the mirror assembly and the snout of the cryostat
against the frigid and variable conditions of the polar environment. Warm air from a heated
room below the telescope mount vents out through the top of the cone keeping the interior at
a stable ∼ 15◦ C. The foam chosen for the cone was a closed-cell, dry nitrogen gas expanded
polypropylene foam, Propozote PPA30, manufactured by Zotefoams PLC. Unlike standard
foams, the Propozote contains no water vapor or oxygen gas, resulting in superior microwave
transmission properties.
Though it appears to be an exceedingly basic structural component, the construction of
the cone was by no means simple, chiefly because the propozote foam was only available in flat
6' by 3' sheets ∼ 1.3 inches thick. This necessitated a multi-stage process to precisely shape,
cut and assemble the flat sheets of foam into a contiguous cone capable of supporting many
pounds of equipment from its apex. The cone was assembled from 18 individual triangular
sections (40◦ conical wedges), each made from a single sheet of propozote deformed to match
the curvature of the cone and cut to 1/9th of the cone’s circumference (a 40◦ angular extent)
then glued together into a uniform solid structure. The cone was constructed entirely on
site in the Enrico Fermi Institute high-bay at the University of Chicago and so required
the design and fabrication of uniquely tailored tools and equipment not otherwise available.
Furthermore, components were designed and manufactured to carefully couple the cone to
the telescope mirrors without introducing obstructions into the optical path or compromising
the integrity of the delicate foam.
2.2.1 Shaping The Foam
It was experimentally determined that when heated to ∼ 150◦ C the propozote foam becomes
pliable without losing its integrity; beyond that temperature it begins to deteriorate. Upon cooling
back to room temperature the foam retains whichever deformations were introduced. In this
way the foam could be thermoformed to the desired conical shape. The challenge was to
build a suitable mold to force the foam into the precise conical shape we needed and find a
precision oven that could fit the sheets and mold. No such oven was readily available so it
was fabricated on-site.
Figure 2.8: Views of the underside of the oven lid (top) and the oven during operation
(bottom) under the watchful eye of Robert Schwarz. The top photograph shows one of
the fans and one of the heaters. The fan blade protrudes below the lid and is stabilized by ceramic
legs, which insulate the hot interior from the wooden oven exterior. The resistive heater is
suspended below the fan and also attached by ceramic legs. The fiberglass insulation lining
the inside of the oven can be seen as well. The oven control box and four fan motors are
clearly visible in the bottom photograph.
The oven was roughly an 8’ by 8’ wide, 4’ tall, fiberglass insulated wooden box with a
removable top-side lid that was clamped down during operation. We suspended eight 3 kW
resistive heaters from the inside of the lid and set large circulation fans directly behind
them to help evenly distribute the hot air. The fan motors were located outside the oven, mounted to the top of the lid so as to operate at room temperature, with only the drive shafts
extending inside. The heaters and fans were operated by a PID control box connected to a
thermometer affixed to the foam to bring the foam to, and maintain it at, the desired temperature.
The mold used to deform the foam sheets consisted of two halves, a bottom half curved
inwards and a top half curved outwards. The foam sheets were sandwiched between these
two halves and thereby forced into the desired shape by the weight of the mold. Each half of
the mold was fashioned from a series of evenly spaced aluminum plates — stacked from the
cone base to the apex — resembling a curved bookshelf when fully assembled. The plates
came in pairs, so that a given plate in one half of the mold had a mate in the other half.
Each plate pair was water-jet cut from a larger single plate, so that the discarded portion
corresponded to a cross-section of the cone segment at a given height from its base. Sheet
metal was riveted to the curved edges of the plates to form the interior contact surface with
the foam.
Each half of the mold weighed several hundred pounds, as did the lid of the oven. A
five ton capacity crane, which runs along tracks near the roof of the high-bay, was used
to lift and position these components within and atop the oven. The bottom half of the
mold was left permanently inside. To thermoform a sheet of foam, first the lid of the oven
was removed using the crane and set aside on supports. Next the top half of the mold was
removed from the oven and suspended with the crane. Prior to placing the foam sheets in
the mold, they were precut to roughly the correct dimensions with a handheld electrically
heated knife, which melted the foam as it was cut. The rough-cut foam was placed within
the mold, the top half was returned, and the lid was set and secured upon the oven. The
Figure 2.9: A rendered CAD drawing of the oven-form used to shape the foam sheets (top)
and the oven-form in action (bottom). In the drawing the sheet metal surfaces have been
omitted, but they are clearly visible in the photograph. The aluminum plates are colored
purple and are bolted together via the cyan-colored, cylindrical aluminum spacers. The
conical outline of the foam cone is clearly visible.
Figure 2.10: The pre-cut and oven-formed foam sheet is laid upon the wooden cone mandrel
(left) then the cutting track is positioned at one edge of the foam sheet (middle). A motorized
circular saw is inserted into the track and is used to cut a straight edge and the process is
repeated for the other edge. The final product is a precision cut conical section (right).
oven was turned on and the foam was slowly raised to 150◦ C, “soaked” for ten minutes, then
cooled back to room temperature. The entire process took about an hour per sheet even
when executed efficiently.
2.2.2 Assembling the Cone
Since the purpose of the cone was to maintain azimuthal uniformity in the secondary
support structure, it was imperative that the wedges be assembled without gaps. Each
curved foam section needed to be cut with extreme accuracy into the desired conical wedge
dimensions; straight edges on either side of the segment were required. A full-size conical
wooden mandrel was constructed exclusively to serve as a work surface (i.e. base) on which
to cut and assemble the sections. A modified hand-held circular saw was used to cut the
wedges to size. The saw was mounted to a unistrut track, which extended from the apex
to the base of the mandrel; the mandrel was rotated about its axis between cuts. The saw
blade extended below the track a fraction of an inch into the surface of the mandrel. A
Figure 2.11: A rendered CAD drawing (right) showing the offset-seam layering strategy and
the vacuum-bagging technique (left) used to set the adhesive. In the drawing, the purple
segments comprise the bottom layer of cone segments and the yellow top layer segments are
offset so that seams of each layer do not overlap. The photograph shows the mechanical
pump used to evacuate the air under the plastic sheet (affixed to the mandrel) via a plastic
hose, diligently monitored by the author. The photograph is also a good illustration of the
cone’s size.
curved foam sheet was mounted to the mandrel then cut into a precise wedge by running
the circular saw up the track along pre-measured positions along the mandrel.
We chose to join the individual wedges by bonding together two stacked layers of shaped
and cut propozote sheets, offsetting the position of the wedge seams (edges) in each layer
by half a wedge, or 20◦ . This way, adhesive could be applied uniformly between the two
layers of foam as opposed to between the edges of two wedge sections. This again preserves
the uniformity of the design, distributing the glue equally around the cone. Furthermore,
the widely distributed adhesive produces a more secure bond. It is worth noting that the
Figure 2.12: After assembling and gluing all of the cone segments into a continuous cone, a
final cut is required to match the base and apex of the cone to specifications. The vertical
saw track is used again, though this time an electrically heated knife is clamped at a fixed
position and angle (left). The track is left stationary and the wooden mandrel is rotated
(center) trimming off excess foam and producing a final precision cut cone (right).
propozote sheets are produced with a laminate skin on the surface of the sheets. In order
to aid in the final assembly and improve transmission, we had the laminate removed from
the sheets prior to receiving them. We used Scotch 924 transfer film adhesive, applied with
a tape gun, to bond the two layers of foam.
Though the shaping process worked exceedingly well, it was by no means perfect. The
main problem was that the adhesive required strong and even pressure around the
cone to set it against the resistance caused by non-idealities in the shape of the wedges. To
provide this pressure we wrapped the weakly assembled cone in a plastic sheet, secured it
against the underlying mandrel with plasticine strips and evacuated the air between the
sheet and mandrel using a vacuum pump. This provided the necessary pressure over the
whole surface to set the adhesive bond, better shaped the foam into the cone shape of the
mandrel and removed air bubbles between the two layers.
Finally, once the adhesive was set, rendering the cone whole, a final trim was
Figure 2.13: The top cap (left) and the bottom channels (right) used (respectively) to couple
the secondary mirror to the cone and the cone to the primary mirror. Both pieces are
constructed in pairs which bolt together to clamp them to the foam cone at the apex and
base edges.
needed to cut the base and apex of the cone to specification. The cone apex was to be open
to expose the hole in the secondary mirror to the sky and the top edges had to be flat and
level. The edges at the base of the cone needed to be cut to an angle that would match
the slope of the primary mirror guard on which it would stand. The heated knife used to
rough-cut the cone was now mounted to the circular saw track at the required angle and
location along the face of the cone. The cone was rotated under the track to make an even
circular cut along the apex and base.
2.2.3 Coupling the Cone to the Mirrors
The pieces used to couple the foam cone to the primary and secondary mirrors are shown in
Figure 2.13. There are two sets of coupling pieces, the top cap and the bottom channel. Both
sets are intended to clamp to the foam cone along the full circumference of either of its two
edges. This design maintains the uniformity of the overall cone, reinforcing its shape and
integrity while avoiding any drilling into the foam. The
pieces were produced from fiberglass by a private manufacturer. To improve the frictional
force between the fiberglass pieces and the foam cone, the interior surfaces of the coupling
pieces were lined with adhesive-backed sandpaper.
The top cap was composed of two parts, a bottom and a top; the bottom (top) piece
was placed inside (outside) the cone making contact with the interior (exterior) surface of
the cone. The two parts were designed to make contact with each other and clamp onto the
edge of the cone when bolted together (using fiberglass nuts and bolts) inside the opening
at the apex. The bottom channels were likewise designed in an interior/exterior pairwise
fashion, though they were further segmented about the circumference of the cone into nine
pairs since a continuous ring would have been unreasonable to produce or work with. The
seams between the exterior channels were offset from those of the interior channels and also
offset from the seams in the foam cone. The bottom channel (i.e. the bottom edge of the
cone) was positioned to lie outside of the primary mirror; the two halves were bolted to each
other and to the guard ring outside of the cone, again using fiberglass nuts and bolts.
2.2.4 Shipping the Foam Cone
It is worth devoting a very short section to the interesting challenge of shipping the cone
to the South Pole; the major obstacle was its size. The width of the base of the cone
exceeded the width of the cargo doors of the LC-130 planes that fly from the polar staging
base in Christchurch, New Zealand (CHC) to McMurdo (MCM) Station, Antarctica or from
MCM to the South Pole (NPX). To get around this problem we designed a box that would
accommodate the cone tilted at an approximately 30◦ angle, reducing the required width
by about 10%. The resulting box was 97” by 127” wide and 96” high, framed by 6” by 6”
wooden posts and 4' by 8' plywood sheets, weighing approximately 1800 lbs as compared
to the ∼ 50 lb cone.
The two cones were packed in expandable foam and shipped to Port Hueneme, California
on November 5th, 2004; the box is pictured opened with the cone inside and then loaded on
a flat-bed truck in Figure 2.14. The intended route to Pole was by container ship to MCM
from the Naval base at Port Hueneme but by the time the cone arrived on November 8th,
the last ship had sailed. So, from Port Hueneme the cone was sent to the nearest Air Force
Base with the hope that it could be sent via C-17 cargo planes destined for CHC for use in
polar operations. The Air Force rejected the crate, citing that it was “too light for its size”, an
odd but fairly accurate complaint. Further options for direct shipment to CHC were limited;
at the time the cone was in California, the Long Beach Port dockworkers were on strike,
backing up sea freight and redirecting it to cargo planes.
Finally an available cargo operation was identified; the cone was set to fly by Cargolux4
on Nov. 28th — the other way around the world — from Los Angeles, California (LAX) to
Luxembourg then on to Auckland, New Zealand (AKL), likely via O’Hare airport in Chicago,
Illinois (the cone’s point of origin). From AKL on the north island of New Zealand, the cone
traveled by truck then ferry to the south island, where it finally made its way down a two-lane
highway to CHC. From CHC the cone traveled by C-17 to MCM and LC-130 to NPX without
a hitch, arriving at Pole on December 16th 2004, just before flight operations ceased for the
Christmas holiday. The foam cone, on site at Pole, with top-cap attached and bolted to the
4. http://www.cargolux.com/
Figure 2.14: The completed cone with bottom channels (top left) and top cap (top right)
attached. In order to fit the cone into a commercial cargo plane it was necessary to construct
a box that accommodated the cone tilted at an approximately 30◦ angle (bottom left). The
cone is seen leaving the University of Chicago Enrico Fermi Institute high-bay (bottom right).
telescope guard-ring, set for regular operation, is shown in Figure 2.15 with the telescope
pointed at the horizon.
Figure 2.15: The completed cone on the telescope, bolted to the primary guard ring.
2.3
The Receiver
In this section I discuss some of the remaining components of the telescope that are relevant
to subsequent discussion in this thesis. Having reviewed the path of incident radiation from
the sky to the PSBs, I now turn my attention to the bolometers themselves, the receiver
assemblage and the cryostat. I forgo any discussion regarding the circuitry or data acquisition
and storage.
2.3.1
The Bolometers
A bolometer measures the temperature increase of an absorber that is weakly coupled to a
temperature bath (sink). In a polarization sensitive bolometer the absorber is constructed
as a set of parallel-line conductors. The QUaD PSBs were manufactured by the JPL Microdevices Laboratory, which also produced similar detectors for the Planck satellite and
identical detectors for the BICEP experiment (Takahashi et al., 2009). These detectors
are a silicon nitride (Si3 N4 ) membrane that has been metalized with gold over titanium;
Figure 2.16 shows a scanning electron micrograph of a single PSB.
The parallel traces are spaced at 105 µm and can be used at either 100 or 150 GHz if
coupled to the appropriate feed horn. The absorber grid is reinforced by perpendicular non-metalized traces and is supported by a series of non-metalized legs, which are attached to the
thermal bath; the grid is coupled to the bath via three thicker metalized legs, which were
also designed to be ablated to adjust the conductivity. The temperature of the absorber
is measured by a neutron-transmutation-doped thermistor, which sits along a conducting
ring that runs along the outside circumference of the grid. Electrical signals to/from the
thermistor are carried by the middle one of the three thicker legs.
The thermal equilibrium state of a bolometer is quantitatively expressed through the
Figure 2.16: Scanning electron micrograph of a single PSB membrane. The 4.5 mm diameter
absorber is suspended by an array of radial supports. Metalized traces run from the upper
left to the lower right, and perpendicular, non-metalized traces provide mechanical support.
The NTD thermistor is located on the upper left edge of the absorber; the three thicker
legs running from it couple the absorber to the housing, and the central one carries charge.
The scale (bottom right) is 2 µm. Figure from Hinderks et al. (2009)
power balance equation

P + Q = \int_{T_{bath}}^{T_{bolo}} G(T)\, dT ,    (2.2)
where P is the power dissipated by the thermistor, Q is the power (heat) deposited by incident
radiation, and G(T ) is the thermal conductivity between the absorber and bath. The term
Q will include contributions from unwanted sources of radiation such as warm components
along the optical path, contamination via extended sidelobes and from the atmosphere —
these are referred to collectively as sources of “optical loading”.
The average loading per detector amounts to ∼ 30 K in Rayleigh-Jeans temperature with
approximately equal contributions of ∼ 10 K from the telescope and atmosphere each. The
atmospheric loading varies from day-to-day depending on weather conditions and on the
elevation angle of observation as Tload (θ) ∝ τ Tatm csc(θ), where τ is the optical depth.
The latter can be exploited to normalize the detectors to the same relative calibration as is
discussed in Section 3.2. Excessive loading must be prevented as it will result in a loss of
sensitivity from an increase in photon noise and a decrease in detector responsivity.
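As a concrete illustration of the csc(θ) elevation scaling, a few-line sketch; the zenith optical depth and effective atmosphere temperature below are placeholder values chosen only to reproduce the ∼ 10 K atmospheric loading quoted above, not QUaD measurements:

```python
import numpy as np

# Elevation scaling of the atmospheric loading, T_load ∝ tau * T_atm * csc(theta):
# lower elevations look through a longer air column. TAU and T_ATM are
# placeholder values, not measured QUaD quantities.
TAU = 0.03     # zenith optical depth (assumed)
T_ATM = 230.0  # effective atmosphere temperature in K (assumed)

def t_load(el_deg):
    """Atmospheric loading in Rayleigh-Jeans K at elevation el_deg."""
    return TAU * T_ATM / np.sin(np.deg2rad(el_deg))

for el in (40, 50, 60):
    print(f"elevation {el} deg: T_load = {t_load(el):.1f} K")
```

Near the QUaD pointing elevation of ≈ 50° this gives roughly 9 K, consistent with the ∼ 10 K atmospheric contribution quoted in the text.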
The bolometer thermistor is wired in series with two load resistors (R_L) and under normal
operations it is biased by a current I_{bias} so that

\frac{V_{bolo}}{R(T)} = \frac{V_{bias}}{2R_L + R(T)} .    (2.3)
The resulting power dissipated by the thermistor is given by

P = \frac{V_{bolo}^2}{R(T)} \approx V_{bias}^2 \frac{R(T)}{4R_L^2} ,    (2.4)

where R(T) is the thermistor resistance and R_L \gg R(T). Referring back to equation 2.2,
the bias voltage V_{bias} is then given by

V_{bias}^2 \propto \frac{1}{R(T)} \left[ \int_{T_{bath}}^{T_{bolo}} G(T)\, dT - Q \right]    (2.5)

(Hinderks, 2005).
The term inside the brackets on the right-hand side of equation 2.5 is effectively fixed for
typical optical loading. Thus a change in the bias voltage results from a small change in the
thermistor resistance when the bolometer is heated by an increase of incident photons (or
cools from a decrement). This is the bolometer signal, referred to in equation 2.1, which is
amplified and read out by the data acquisition software.
Figure 2.17: A schematic of the QUaD cryostat. Figure from Hinderks et al. (2009)
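To make the power balance of equation 2.2 concrete, the sketch below solves it numerically for the absorber temperature under a hypothetical power-law conductivity G(T) = G0 (T/T0)^β. All numerical values (G0, β, the bias power, and the loadings) are invented for illustration, not QUaD detector parameters:

```python
# Numerical illustration of the power balance P + Q = integral of G(T) dT
# (eq. 2.2), using a hypothetical power-law conductivity G(T) = G0*(T/T0)**BETA.
# All numbers here are invented, not QUaD detector parameters.
T_BATH = 0.250                      # K, thermal bath (fridge UC stage)
G0, T0, BETA = 200e-12, 0.3, 1.5    # W/K at T0 (assumed)

def conducted_power(T_bolo):
    """Closed-form integral of G(T) dT from T_BATH to T_bolo for a power law."""
    k = G0 / T0**BETA / (BETA + 1.0)
    return k * (T_bolo**(BETA + 1.0) - T_BATH**(BETA + 1.0))

def solve_T_bolo(P, Q, lo=T_BATH, hi=1.0, tol=1e-9):
    """Bisection solve of eq. 2.2 for the absorber temperature."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if conducted_power(mid) < P + Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

P_BIAS = 5e-12  # W, electrical power dissipated by the thermistor (assumed)
for Q in (5e-12, 6e-12):  # W, optical loading before/after a small increase
    print(f"Q = {Q:.1e} W -> T_bolo = {solve_T_bolo(P_BIAS, Q) * 1e3:.1f} mK")
```

A small increase in the optical loading Q raises the equilibrium absorber temperature slightly, which is exactly the temperature change read out through the thermistor resistance.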
2.3.2
Cryogenics
The cryostat maintains the temperature of the receiver, cold optics and cold electronics for
extended periods of time; a schematic is shown in Figure 2.17. The top of the cryostat fits
into a hole in the primary mirror, exposing a cylindrical snout, which houses the receiver
window and the first of the two cold lenses. The receiver core, comprised of the focal
plane array and associated electronics, is inserted through the bottom of the cryostat and
is surrounded by two concentric toroidal cryogen tanks. The outermost 35 L tank holds
liquid nitrogen (LN2) at a temperature of ∼ 70 K while the innermost 21 L tank holds liquid
helium (LHe) at a temperature of 4 K. The LN2 is used primarily to pre-cool the LHe tank;
it takes over a week of cooling for the central chamber of the cryostat to reach 4 K.
A three-stage He-4/He-3/He-3 sorption refrigerator is used to reach the ∼ 250 mK receiver operating temperature. The first two stages, containing He-4 and He-3, are referred
to as the Intercooler (IC) and can reach 430 mK while the third Ultracooler (UC) stage
contains He-3 only and reaches the final temperature of 253 mK. The IC and UC can hold
for 31 and 24 hours respectively before the fridge must be cycled, which takes approximately
five hours. Correspondingly the receiver core (see Figure 2.18) is divided into three stages:
a 4 K baseplate bolted directly to the LHe tank, a 430 mK intermediate stage coupled to
the fridge IC and the 250 mK focal plane coupled to the fridge UC. The baseplate houses
the JFET amplifier boxes and the three stages are separated and supported by insulating,
Vespel-tube legs in a hexapod configuration.
Figure 2.18: A photograph of the QUaD receiver core, showing the 4 K baseplate, the
430 mK intermediate stage with FET box, and the 250 mK focal plane. The feed horns are
situated prominently at the top of the curved focal plane. Figure from Hinderks et al. (2009)
CHAPTER 3
FROM SKY TO MAP
This chapter broadly reviews the entire data analysis pipeline from the observation strategy
to the final sky temperature anisotropy bandpowers. The bandpowers produced by the
steps outlined here are not purely of cosmological origin but are free of terrestrial noise and
contamination. I defer a discussion of the origin of the sky power to Chapters 5 and 6.
The observation strategy is reviewed in Section 3.1, followed by the low-level data processing and calibration in Section 3.2 and the mapmaking process in Section 3.3. The maps
contain obvious point sources at both frequencies that must be removed or masked to prevent
them from overwhelming the CMB signal at small angular scales; the systematic process of
identifying the point sources is covered in Section 3.4.
The QUaD analysis pipeline broadly follows the MASTER analysis introduced by Hivon
et al. (2002). The general idea is to derive an unbiased estimate of the true sky bandpowers
by using simulations of the entire data reduction and spectra calculation pipeline to subtract
noise and account for the effects of filtering and beam convolution. This is the motivation
for the simulations covered in Sections 3.5 and 3.6. Though beam characterization is integral
to the pipeline, I defer the detailed discussion of it to Chapter 4.
The QUaD telescope was designed and equipped as a polarization experiment and many
of the observations and analysis procedures were tailored to produce high signal to noise
polarization maps and power spectra free of systematics. Therefore, I omit any discussion
of calculating Stokes parameter Q and U maps as they are irrelevant to the subject of this
thesis.
Figure 3.1: The QUaD field outlined in white along with the B03 deep and shallow regions
outlined in red on an equal area azimuthal projection about the SCP. The line bisecting the
QUaD region distinguishes the lead and trail fields (see text). The map is a dust emission
intensity prediction at 150 GHz from the FDS model 8 (Finkbeiner et al., 1999). The color
scale is linear from 0 to 100 µK, and is heavily saturated. The white asterisk and cross show
the locations of RCW38 and PKS0537-441 respectively. Figure from Pryke et al. (2009)
3.1
Observation Strategy
The QUaD CMB observation field is centered at 5.5h Right Ascension (RA), -50◦ Declination
(Dec) and covers ∼ 70 square degrees. The field (see Figure 3.1) was chosen for its relatively
low level of large-scale foreground emission associated with the Galaxy and for overlap with
the Boomerang 2003 (B03) fields (Montroy et al., 2006) — used for absolute calibration
(Section 3.3.5). The CMB observations were made from the South Pole during the three
Austral winters of 2005–2007 when the Sun was below the horizon. Due to complications
with the focus and beams in the first season of observations that are discussed in Chapter 4,
the analysis presented in this work makes use of the second and third seasons only.
At the South Pole the celestial sphere rotates about the zenith once every 24 hours.
Accordingly, the CMB observations are organized into ≈24-hour periods starting at the
same LST each day with 19 hours devoted to the field observations and the remaining five
reserved for cycling the fridge and refilling the cryostat — during which time the CMB
field is otherwise inaccessible as it passes behind the thermal plume of the heated MAPO
observatory building that houses QUaD.
There were a total of 289 days of observations over the 2006 and 2007 seasons. Of
these, 44 were rejected after low-level processing due mostly to extreme weather while an
additional 102 days were rejected due to moon contamination either upon visual inspection
or after being flagged by a conservative cut determined from the far circular sidelobe model
discussed in Section 4.7. The remaining 143 days result in broadly uniform coverage of the
field. The relatively crude rejection of entire 24-hour blocks of data is done as a convenience
and though potentially some of this data might be recovered through more careful selection,
it could never result in better than a ∼ √2 improvement in sensitivity.
The daily observations are divided into two 8-hour blocks bookended by calibration observations. Between the 8-hour blocks, the entire telescope is rotated 60◦ about the pointing
Figure 3.2: An illustration of the QUaD scan pattern for a single 8-hour period. Each
horizontal line represents one of 10 half-scans at a fixed elevation. The lead field (blue)
and trail field (orange) are observed back-to-back with the trail following the lead after four
elevation steps (40 half-scans, four scan-sets), repeating the same coverage in azimuth and
elevation but offset 0.5h in RA. Note that the x- and y-axis scalings distort the shape of the
coverage area, which is actually a letterbox.
axis (boresight) — in what is referred to as a deck-rotation — and then allowed to thermally
stabilize for 30 minutes.
The 8-hour blocks are further subdivided into 16 half-hour blocks during which the telescope is scanned 20 times back-and-forth (for a total of 40 half-scans) over 7.5◦ in azimuth,
at a rate of 0.25◦ per second (corresponding to 4.8◦ and 0.16◦ per second on the sky at the
pointing Dec). After five consecutive scans (a scan-set) the telescope is stepped 0.02◦ in
Dec so that each half-hour block completes a 0.16◦ × 4.8◦ letterbox strip of sky, which is
then paired to the next four scan-sets in a lead-trail fashion. Figure 3.2 illustrates the scan
pattern for an 8-hour period.
The first letterbox (the lead-field) is observed centered at 5.25h RA; the second letterbox
(the trail-field) is observed over the same azimuth range, exactly 30 minutes after the start
of the lead-field, with the celestial sphere rotated by 0.5h RA so that it is centered at 5.75h
RA. Since the lead and trail fields cover the same range in azimuth they can be differenced
to remove any ground-stationary signal stable over a 30 minute timescale — at the expense
of sensitivity.
This scheme results in eight lead-trail pairs per 8-hour block, then the telescope is deck-rotated and the exact same fields are repeated again, i.e. a single 0.64◦ × 4.8◦ lead-trail field
pair is observed at two different deck-angles per day. The scan strategy covers the entire
QUaD CMB field in ≈ 9 days, then is offset by 0.16◦ Dec and repeated four times over the
course of ≈ 37 days, for a total of ≈ 8.5 times over the course of the 2006 and 2007 seasons.
Additionally, interspersed amongst the field observation blocks are a series of special
calibration observations. Before and after the 8-hour blocks, each row of the telescope array
is scanned in azimuth across RCW38 (a rowcal) — intended to measure relative gains and
monitor the beam — and the central pixel is scanned in azimuth and elevation across a
pointing source (a cross-obs). The half-hour blocks are preceded by additional calibration
observations intended to measure the relative detector gains; the entire telescope is scanned
in elevation (an elnod) and an internal calibration source is observed (the calsrc). The role
of these calibration observations will be discussed in more detail in Section 3.2.
The full telescope pointing solution is also occasionally updated via cross-obs covering
a set of radio pointing sources distributed across the sky. Finally there were 11 dedicated,
full-day observations of the beam calibration source RCW38 (beamruns) throughout 2006
and 2007 and 9 beamrun observations of PKS0537-441. These observations are intended to
accurately measure the beam profiles and are exhaustively discussed in Chapter 4.
3.2
Low Level Data Processing
Here I describe the low level processing necessary to prepare the timestream data for the mapmaking stage. These steps include deconvolving the finite temporal detector responsivity,
normalizing all the detectors to the same instantaneous gain, correcting for any long term
gain fluctuations and finally measuring the noise properties of the data to ensure adequate
data quality.
3.2.1
Deconvolution
The first step in the data processing is deconvolution. For an ideal bolometer, the response
function in Volts/Watt is given by
S(\omega) = \frac{S_0}{1 + i\omega\tau} ,    (3.1)
where τ = C/G is the detector time-constant — C and G are the heat capacity of the
absorber and the thermal conductivity of the detector with the thermal bath respectively
(Hinderks et al., 2009). This finite temporal response effectively acts as a low-pass filter
suppressing higher frequency modes in the sky signal.
The detector time-constants are measured by observing a square-wave-modulated signal
supplied by a Gunn diode RF source — switched every 100 s — illuminating the focal plane in
situ at Pole. The step function signal convolved by the detector response directly measures
the transfer function, which for an ideal bolometer is given by
H(\omega) = \frac{1}{1 + i\omega\tau} .    (3.2)
In practice this function is fit to the Fourier transform of the observed Gunn signal; Figure 3.3
shows the result.
Figure 3.3: A square wave reference signal from a Gunn diode source (dashed black) was
used to measure the bolometer time constants. The observed signal (top, colored solid) is
convolved by the detector responsivity; the Fourier transform yields the transfer functions
(bottom, colored solid). The detector transfer functions are fit to single (dotted black) and
dual (solid black) time-constant models (see text). For many of the detectors, the single time
constant (τ1 ≈ 30ms) model is adequate (left), though for some detectors a second longer
time constant (τ2 ≈ 1s) is necessary (right). Figure from Hinderks et al. (2009)
For many of the detectors this single time-constant model is an adequate fit with values
of τ ≈ 30ms but for some a second much greater time-constant is required:
H(\omega) = \frac{1 - \vartheta}{1 + i\omega\tau_1} + \frac{\vartheta}{1 + i\omega\tau_2} ,    (3.3)
with ϑ ≤ 0.5 and τ2 ∼ 1s. The second time-constant is believed to be due to debris on the
detectors that couple weakly to the bolometer grid. For two of the 100 GHz bolometers this
model is still insufficient to describe the transfer function and they are excluded from the
analysis; therefore we lose two PSB pairs.
To test the success of our fits and the long term stability of the time-constants we also
examine 11 observations of RCW38 (see Chapter 4). Maps of RCW38 are constructed
from forward and backward scans alone and then differenced to produce a jackknife map
(Section 3.3). In the event of a deconvolution failure, the shape of the point-like source
would be distorted, the two directional scan maps would appear as mirror images, and they
would not properly cancel. The jackknife maps show no evidence of a residual, confirming that
the deconvolution is accurate.
During the deconvolution process, the timestream is low-pass filtered to 5 Hz and
downsampled to 20 Hz. Finally a deglitching algorithm flags data with evidence for cosmic
ray hits and removes the entire affected half-scan from the analysis, eliminating < 1% of the
data.
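The deconvolution step can be sketched in a few lines: divide the Fourier transform of the timestream by the single-time-constant transfer function of equation 3.2, applying a low-pass filter at the same time so that high-frequency noise is not amplified. Only the 5 Hz cutoff and τ ≈ 30 ms come from the text; the sample rate and the filter shape below are assumptions:

```python
import numpy as np

FSAMP = 100.0  # Hz, raw sample rate (an assumption, not the QUaD value)
TAU = 0.030    # s, typical single time-constant from the text

def apply_response(d, fsamp=FSAMP, tau=TAU):
    """Forward-convolve a signal with H(w) = 1/(1 + i*w*tau) (for testing)."""
    f = np.fft.rfftfreq(len(d), 1.0 / fsamp)
    return np.fft.irfft(np.fft.rfft(d) / (1.0 + 2j * np.pi * f * tau), len(d))

def deconvolve(d, fsamp=FSAMP, tau=TAU, f_lp=5.0):
    """Divide out the detector response; low-pass to tame high-f noise."""
    f = np.fft.rfftfreq(len(d), 1.0 / fsamp)
    H = 1.0 / (1.0 + 2j * np.pi * f * tau)
    lp = 1.0 / (1.0 + (f / f_lp) ** 8)  # assumed smooth low-pass shape
    return np.fft.irfft(np.fft.rfft(d) / H * lp, len(d))

# A step input smeared by the detector is restored by the deconvolution,
# mimicking the Gunn-diode square-wave measurement described in the text.
t = np.arange(0.0, 10.0, 1.0 / FSAMP)
step = (t > 5.0).astype(float)
restored = deconvolve(apply_response(step))
```

Away from the step edge the deconvolved timestream recovers the input levels, which is the behavior the jackknife test above checks for on RCW38 maps.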
3.2.2
Relative Gain Calibration
Each of the detectors in the array has its own individual gain level, i.e., Volts/Kelvin, and
before the data can be coadded these gains need to be equalized across all channels. To
measure the relative gains of all the detectors it is necessary to illuminate the focal plane
uniformly and compare their response: the atmosphere is an ideal beam filling source. In an
Figure 3.4: During an elevation nod (elnod) the telescope is scanned in elevation by ≈ 1◦
(top panel) injecting atmospheric signal into the timestream (bottom panel). The timestream
of each detector is regressed against a csc(θ) model (red line) to calculate a Volts/Airmass
calibration factor; all detectors are normalized to the mean calibration factor for a given
frequency. The resulting relative gains are normally distributed with a ≈ 10% fluctuation
(right panel). Figure from Hinderks et al. (2009)
elnod — performed prior to every half-hour observation block — the telescope is scanned
≈ 1◦ in elevation, injecting a common mode ramp into the timestream of each detector (see
Figure 3.4). The ramp is associated with the decreasing (increasing) integrated column of
atmosphere observed at an increasing (decreasing) pointing elevation: proportional to csc(θ).
The timestream data dt,i of each detector i is regressed against this model to determine a
calibration factor in Volts/Airmass ξi . The data is then normalized to the mean gain for all
channels at a given frequency, \bar{\xi}, so that the calibrated data is given by

d'_{t,i} = \frac{\bar{\xi}}{\xi_i}\, d_{t,i} ,    (3.4)

where \bar{\xi}/\xi_i is the relative gain factor; the distribution of these is shown in Figure 3.4. There
is a fluctuation from day to day in the relative gains at the ≈ 1% level that is not understood
but treated as real.
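A toy version of the elnod regression and the normalization of equation 3.4, with made-up gains, elevation range, and noise level (the real fit details may differ):

```python
import numpy as np

# Toy elnod relative-gain calibration (eq. 3.4): regress each detector's
# timestream against a csc(elevation) airmass model to get Volts/airmass
# factors xi_i, then rescale the data by xi_bar/xi_i. All numbers invented.
rng = np.random.default_rng(0)

el = np.deg2rad(np.linspace(48.0, 49.0, 500))  # a ~1 degree elevation nod
airmass = 1.0 / np.sin(el)

true_gains = np.array([0.9, 1.0, 1.1, 1.05])   # hypothetical V/airmass gains
data = (true_gains[:, None] * airmass[None, :]
        + 3e-4 * rng.standard_normal((4, airmass.size)))

# Per-detector least-squares fit of d_i(t) = xi_i * airmass(t) + const_i.
A = np.column_stack([airmass, np.ones_like(airmass)])
coeffs, *_ = np.linalg.lstsq(A, data.T, rcond=None)
xi = coeffs[0]                                  # fitted V/airmass factors

calibrated = (xi.mean() / xi)[:, None] * data   # eq. 3.4
```

After the rescaling, all four synthetic detectors respond identically to the common atmospheric ramp, which is the condition needed before co-adding.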
3.2.3
Long Term Gain Stability
While the elnods are effective at determining the short term relative gain calibration, they
only provide limited information about the long term gain stability — the elnod gain factors
depend on the integrated airmass, which can fluctuate depending on weather conditions. To
monitor long term gain, QUaD employs an internal thermal blackbody calibration source,
modulated by a rotating polarizing grid. The calsrc is located inside the foam cone behind the
secondary mirror and can be projected into the beam through the secondary hole via a small
flip mirror (see the schematic in Figure 3.5 and the photograph of the focus mechanism
in Figure 2.5). As the grid is rotated the calsrc signal varies sinusoidally; the amplitude
of this sinusoid is measured before every half-hour observation block. Figure 3.6 shows
the resulting calsrc amplitude for the second and third seasons as compared to the elnod
calibration factors. The strong anti-correlation between the elnod and calsrc is believed to
be caused by gain suppression through increased atmospheric loading. A gain suppression
correction factor is derived by regressing the mean calsrc against the mean elnod timeseries,
which is then applied to the data.
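The gain-suppression correction amounts to a linear regression between the two calibration timeseries; a synthetic-data sketch (all numbers invented, and the actual pipeline regression may differ in detail):

```python
import numpy as np

# Synthetic sketch of the gain-suppression correction: the calsrc amplitude
# anti-correlates with the elnod V/airmass factor (higher optical loading
# suppresses gain), so the correlated component is regressed out.
rng = np.random.default_rng(1)
n_days = 200

elnod = 45.0 + 8.0 * rng.standard_normal(n_days)   # V/airmass (invented)
calsrc = (0.50 - 1e-3 * (elnod - elnod.mean())     # anti-correlated part
          + 2e-4 * rng.standard_normal(n_days))    # intrinsic scatter

# Regress calsrc against elnod and remove the correlated component.
slope, intercept = np.polyfit(elnod, calsrc, 1)
corrected = calsrc - slope * (elnod - elnod.mean())

print(f"raw scatter {calsrc.std():.2e}, corrected {corrected.std():.2e}")
```

Removing the fitted correlated component leaves only the intrinsic calsrc scatter, mirroring the suppressed fluctuation seen in the corrected panel of Figure 3.6.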
3.2.4
Noise Properties
The processed timestream data should now be normalized between all detectors at a given
frequency over all days, be free of defects and reflect the true sky intensity at the telescope
pointing. Figure 3.7 shows an example of the processed timestream for a single elevation
scan-set of ten half-scans.
Figure 3.5: The QUaD telescope is equipped with an internal calibration source inside the
foam cone and located behind the secondary mirror. A flip mirror (activated by a remote
IR link) is rotated by 45◦ to position it within the secondary mirror opening and reflect the
source into the telescope beam (top panel). A polarizing grid is rotated in front of the stable
blackbody source to produce a sinusoidally modulated signal (bottom panels). Figure from
Hinderks et al. (2009)
Figure 3.6: The elnod gain calibration factor (left, top) shows daily variation that is strongly
anti-correlated with fluctuation in the calsrc calibration factor (left, middle) for a sample
detector. This is interpreted to be due to gain suppression from increased optical loading on
the detectors; the calsrc fluctuation is suppressed when the correlation is regressed out (left,
bottom). The histograms (right) show the fluctuation in the timeseries data for all detectors
before (top) and after (bottom) gain suppression correction. Figure from Pryke et al. (2009)
The dominant feature of the data is a strongly correlated signal between all detectors
at a given frequency due to atmospheric emission from oxygen and water vapor at 100 and
150 GHz respectively. Since water vapor tends to be clustered (i.e. in clouds) whereas oxygen
is diffuse, the 150 GHz fluctuations tend to be stronger. The signal is correlated between all
channels because the beams are coincident throughout much of the lower atmosphere and do
not diverge until very high altitude; moreover, the signal is not scan-synchronous, implying
that the air currents drive atmospheric fluctuations past the beams at a faster rate than the
telescope is scanning.
The strong atmospheric component is evident in the timestream power spectral densities
(PSDs) plotted in Figure 3.8 as a low frequency 1/f ramp. For the QUaD scan speed
(0.25◦ per sec in azimuth at an elevation of ≈ 50◦ ) the conversion between multipole and
frequency is ℓ ∼ 2000f ; therefore the QUaD CMB science results from 200 < ℓ < 3000
cover the spectral range 0.1 < f < 1.5, i.e. the “Science band” (which is shown in the
figure for 200 < ℓ < 2000). The atmospheric-noise-free pair-differenced PSDs show that
Figure 3.7: The timestream data for 100 GHz (red) and 150 GHz (blue) summed (second
panel from top) and differenced (third panel from top) for all PSB pairs. The data show
no evidence for scan synchronous signal when compared to the scan pattern in azimuth (top
panel) but do show strong correlations from atmospheric signal — stronger at 150 GHz as
expected — that cancels in the pair difference. The dark channels (bottom panel) monitor
the instrumental noise and show no evidence for contamination from stray light or focal
plane temperature variations. Figure from Pryke et al. (2009)
Figure 3.8: Timestream power spectral densities (PSDs) measured per detector per half scan
and then averaged over a single day of data for 100 GHz (red) and 150 GHz (blue) detectors.
The PSDs are noise dominated — there is negligible sensitivity to CMB in a single scan —
and show the effects of atmospheric 1/f noise at low frequencies and, at high frequencies,
microphonic line noise and the low-pass filter rolloff. The “Science band” demarcated by
the dotted lines is the frequency range that corresponds to multipoles 200 < ℓ < 2000 (see
text). Figure from Pryke et al. (2009)
the timestream noise is very close to white within the Science band. In the mapmaking
stage the atmospheric component is suppressed via a 3rd order polynomial filter. At higher
frequencies, some channels show narrow line noise that is likely due to microphonic pickup
in the receiver electronics; at the highest frequencies the PSDs roll off with the 5 Hz low-pass
filter.
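The quoted ℓ ∼ 2000f conversion follows directly from the on-sky scan speed; a quick numerical check (the exact multipole-to-frequency mapping used in the analysis may differ in detail):

```python
import numpy as np

# A mode of multipole l has angular period 2*pi/l radians on the sky, so at
# on-sky scan speed v (rad/s) it appears at timestream frequency
# f = v * l / (2*pi). Using 0.25 deg/s in azimuth at elevation ~50 deg:
V_SKY = np.deg2rad(0.25) * np.cos(np.deg2rad(50.0))  # rad/s on the sky

def ell_to_freq(ell):
    return V_SKY * ell / (2.0 * np.pi)

print(f"multipoles per Hz: {1.0 / ell_to_freq(1):.0f}")
print(f"science range: {ell_to_freq(200):.2f} - {ell_to_freq(3000):.2f} Hz")
```

This gives roughly 2200 multipoles per Hz of timestream frequency, consistent with the ℓ ∼ 2000f rule of thumb and the ∼ 0.1–1.5 Hz science band quoted above.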
3.3
Mapmaking
In this section I describe the process of transforming processed timeseries data into
maps co-added over all detectors of a given frequency.
3.3.1
Formalism
In practice the mapmaking procedures in the QUaD analysis come down to a rather straightforward and intuitive set of filtering and binning operations. Yet, there is a widely adopted
mathematical framework for phrasing these procedures in terms of matrix operations that I
would be remiss to skip in this discussion. Therefore, before detailing the QUaD mapmaking
procedure in the following sections I will first briefly introduce this formalism based loosely
on Stompor et al. (2002); Hivon et al. (2002); Tegmark (1997) and references therein.
In the simplest case of a beam convolved, pixelized, true sky map mp , the timestream
data (or time-ordered data, TOD) can be expressed as
d_t = A m_p + n_t ,    (3.5)
where A is the pointing matrix that maps a given sky pixel to a TOD sample dt ∈ dt and
nt is noise.1 The pointing matrix is not one-to-one — a given pixel may be revisited many
1. For an experiment with multiple pixels, a single pointing matrix can be constructed for all pixels, or
the maps may be coadded after the fact.
times in a highly redundant observation strategy.
A generic solution to this equation is

\hat{m}_p = L d_t ,    (3.6)

where L is a binning operator that maps the TOD to the estimated map \hat{m}_p, and it is
convenient to express this matrix as

L = [A^T M A]^{-1} A^T M ,    (3.7)

so that LA = I, the identity matrix. The optimal solution to the map equation is found
by choosing M = N_t^{-1}, where N_t = \langle n_t n_t^T \rangle is the TOD noise covariance matrix. The
optimal map and pixel noise matrix are then given by

\hat{m}_p = [A^T N_t^{-1} A]^{-1} A^T N_t^{-1} d_t    (3.8)
N_p = [A^T N_t^{-1} A]^{-1} .    (3.9)
In the case of the extremely large QUaD dataset, and in the context of the MASTER
analysis method that uses Monte Carlo (MC) simulations to characterize the noise, solving
these equations exactly becomes intractable and unnecessary. Instead a simplification can
be made by high-pass filtering the TOD to reduce the long time-scale noise correlations,
which for QUaD are primarily due to atmosphere (see Section 3.2.4 and Figures 3.7 & 3.8).
In the ideal case the TOD noise matrix elements become simply N_{tt'} = N_{tt}\, \delta_{Dirac}(t - t')
and eq. 3.8 reduces to

\hat{m}_p = O_p^{-1} \sum_{tt'} A_{pt}\, h(t - t')\, d_{t'} ,    (3.10)

where \hat{m}_p are the estimated map pixels, O_p = A^T A is the number of TOD samples in a
given pixel and h(t - t') is the high-pass filter. Equation 3.10 is effectively an elaborate way
of writing “bin the filtered samples by their two-dimensional coordinates” and is essentially
what is done for QUaD, with some important additional steps discussed in the following
sections.
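In code, equation 3.10 with a polynomial high-pass really is just "filter, then bin". The sketch below uses a 3rd-order polynomial removal (the filter order used by the pipeline, though applied here to one toy timestream rather than per half-scan) on an invented one-dimensional scan with a drift much larger than the signal:

```python
import numpy as np

def filter_scan(d, order=3):
    """Remove a polynomial fit (suppresses the slow atmospheric drift)."""
    x = np.linspace(-1.0, 1.0, len(d))
    return d - np.polyval(np.polyfit(x, d, order), x)

def bin_map(d, pix, npix):
    """m_p = O_p^{-1} * (sum of samples landing in each pixel); cf. eq. 3.10."""
    hits = np.bincount(pix, minlength=npix)
    sums = np.bincount(pix, weights=d, minlength=npix)
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums / hits, hits

# Toy example: a 1-D "sky" with one bright pixel, scanned 40 times with a
# slow drift much larger than the signal. Filtering then binning recovers
# the source (up to a small overall offset removed along with the drift).
npix = 50
sky = np.zeros(npix)
sky[20] = 1.0
pix = np.tile(np.arange(npix), 40)                 # pointing: 40 passes
tod = sky[pix] + np.linspace(0.0, 10.0, pix.size)  # drift >> signal
m, hits = bin_map(filter_scan(tod), pix, npix)
```

The bright pixel survives the filtering because the drift is smooth in time while the source is localized in the scan coordinate; this is the intuition behind the high-pass simplification of the optimal-map equations.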
3.3.2
Telescope Pointing
Clearly a fundamental component of the mapmaking equations in Section 3.3.1 is the telescope pointing. Moreover, the relative pointing offsets of each individual detector in the
array must also be identified.
The telescope pointing model that maps RA/Dec coordinates to the azimuth, elevation
and deck angle position of the telescope was established and calibrated by comparing data
from a small optical telescope mounted to QUaD against radio observations of a collection
of bright sources. Day-to-day the telescope pointing accuracy was monitored via cross-obs
of RCW38. The resulting timestream was fit with one dimensional Gaussian profiles to
determine the az/el centroid offsets.
The cross-obs were done in conjunction with rowcals, which proved useful in monitoring
many important array parameters such as the detector spacings, the telescope pointing
and the beam main-lobe widths. Again, the timestream from the rowcals was fit to one-dimensional Gaussian profiles to measure both the centroids and the widths of the observed
blips.
The individual detector pointing was also measured with monthly, dedicated, full-day,
two-dimensional raster-map beamrun observations of RCW38. The maps are fit with two-dimensional elliptical Gaussian profiles to determine centroid offsets and monitor the shape
and width of the beams for each pixel.
All of these methods show consistent results for the pointing accuracy and stability;
overall there tends to be an ≈ 0.5 arcmin (random) wander in the pointing within a single
day of observation but changes in the individual detector offsets are negligible. See Chapter 4
for details on the beamrun and rowcal observations.
3.3.3 Ground Template Removal
Thus far only mentioned in passing: QUaD suffers from substantial azimuth-dependent
ground pick-up. Though it is not evident in an average individual scan — there is no visible
az-synchronous signal in the scans in Figure 3.7 — it becomes obvious in the final co-added
maps. Originally, the lead/trail scan strategy (discussed in Section 3.1) was implemented to
mitigate against such ground contamination. By differencing two fields observed over equal
azimuth ranges, common ground signal cancels out at the expense of a $\sim\sqrt{2}$ hit in the signal
to noise from the reduction in effective observed sky area.
Yet it is possible to exploit additional degeneracies in the QUaD scan strategy to simultaneously solve for both CMB and ground components in the sky signal without resorting to
the sub-optimal differencing. Here I describe the procedure for estimating a ground template
that can be used to remove the contamination, closely following the discussion in Brown et al.
(2009).
Each lead/trail field observation is composed of four, six-minute scan-sets of ten half-scans each, separated by a 0.02◦ step in elevation (see Section 3.1 for more detail). During
a half scan the telescope is modulated 7.5◦ in azimuth — at a fixed elevation — over the
sidereal tracking of a fixed RA field center. Over the course of the six minutes needed to
complete a single scan-set, the sky rotates 1.5◦ in azimuth across the fixed ground. This
allows for separating the sky and ground signals on scales < 1.5◦ or ` > 200. Furthermore
the lead/trail observations can still be used to improve the separation.
Formally, we can express the sky, ground and noise components of the timestream signal
as
$$d_t = A^{\rm cmb} m_p + A^g g_\alpha + n_t + \mu_{\rm scan}, \qquad (3.11)$$
where mp is the RA/Dec dependent sky signal, gα is the azimuth dependent ground signal
and the noise has been divided into two components: a large-angular-scale 1/f offset µscan
and the rest, nt . Equation 3.11 is just the map equation 3.8 with a new component, i.e.,
$$m_p \to (m_p,\, g_\alpha), \qquad (3.12)$$
$$A \to (A^{\rm cmb},\, A^g), \qquad (3.13)$$
where Acmb maps RA/Dec coordinate pixels to TOD samples and Ag maps az/el coordinate
pixels to TOD samples. Since pointing matrices are not one-to-one, the true sky and ground
maps need not have the same pixelizations.
The new map equation is soluble exactly only in the case of full redundancy between Acmb
and Ag. For the QUaD scan strategy the ground and sky components are degenerate on
large angular scales2, resulting in large gradients in the final maps. Instead,
the ground can be estimated independently and then subtracted from the TOD in eq. 3.11
to leave only sky and noise, i.e.
$$d'_t = d_t - A^g\, \hat g_\alpha. \qquad (3.14)$$
The ground is estimated in two steps as follows. First, in order to suppress large gradients,
µscan is estimated as the mean over a given half-scan and is subtracted from the TOD of
that half-scan; where the mean is evaluated over regions of the TOD that overlap in azimuth
in a scan set to prevent biasing the result. Second, a ground template for a single scan set
2. The ground template technique is no better than the field differencing for ` < 200 but the separation
of sky and ground is nearly perfect by ` ≈ 1000.
is estimated from the TOD binned in azimuth,
$$\hat g_\alpha = \frac{1}{O^g} \sum_{t \in s} A^g_{\alpha t}\left(d_t - \mu_{\rm scan}\right), \qquad (3.15)$$
where s is a lead/trail scan-set pair and this expression is akin to eq. 3.10 — subtracting
off µscan acts as a high-pass filter. The pixel size for the ground template is ∆α = 0.1◦; the
ground is smoothly varying in azimuth and this coarse binning is sufficient to describe it.
The template is estimated and subtracted per detector, for a common lead/trail scan-set
pair.
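In code, the template estimate for a scan-set might look like the following sketch (a hypothetical `ground_template` helper; it simplifies eq. 3.15 by taking µscan over the whole half-scan rather than only the azimuth range common to the scan-set):

```python
import numpy as np

def ground_template(d, az, halfscan_id, daz=0.1):
    """Sketch of the ground-template estimate of eq. 3.15: subtract the
    per-half-scan offset mu_scan (a crude high-pass), then average the
    residual TOD in azimuth bins of width daz degrees. Returns the
    template and the azimuth bin edges."""
    d = np.asarray(d, float).copy()
    az = np.asarray(az, float)
    for s in np.unique(halfscan_id):          # subtract mu_scan per half-scan
        sel = halfscan_id == s
        d[sel] -= d[sel].mean()
    edges = np.arange(az.min(), az.max() + 1.5 * daz, daz)
    idx = np.clip(np.digitize(az, edges) - 1, 0, len(edges) - 2)
    g = np.zeros(len(edges) - 1)
    hits = np.zeros(len(edges) - 1)
    np.add.at(g, idx, d)
    np.add.at(hits, idx, 1.0)
    g[hits > 0] /= hits[hits > 0]             # average per azimuth bin
    return g, edges
```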
The expectation value of the ground template estimate is found by plugging equation 3.11
into equation 3.15 and gives
$$\langle \hat g_\alpha \rangle = g_\alpha + \frac{1}{O^g} \left[ \sum_{t \in s} A^g_{\alpha t} A^{\rm cmb}_{tp}\, m_p + \sum_{t \in s} A^g_{\alpha t}\, n_t \right]. \qquad (3.16)$$
The terms in brackets on the right will go to zero for uncorrelated noise and a highly
redundant scan strategy.3
3.3.4 Putting It All Together
The mapmaking proceeds from the ground template subtraction to the final co-addition
of data for all detectors at a given frequency. Before binning into pixels the data are
first filtered to remove large-angular scale 1/f noise associated with the atmosphere by
subtracting a best-fit 3rd -order polynomial from every half-scan in the TOD. Consulting the
noise spectra in Figure 3.8, this step substantially reduces the low frequency correlated noise
and whitens the Science band. In terms of the previous mapmaking formalism this acts as
3. The CMB sky signal is not correlated with azimuth. Also, the QUaD scan strategy is far from highly
redundant and this term acts as an additional filter.
a high-pass filter that simplifies the map equation to the form in equation 3.10.
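The per-half-scan polynomial subtraction can be sketched as follows (a hypothetical helper of my own; the pipeline applies this to every half-scan of every detector):

```python
import numpy as np

def polyfilter(d, order=3):
    """Subtract a best-fit polynomial (3rd order for QUaD) from one
    half-scan of TOD; this is the atmospheric 1/f high-pass filter F of
    Section 3.3.4, applied half-scan by half-scan."""
    x = np.linspace(-1.0, 1.0, len(d))   # normalized position along the scan
    return d - np.polyval(np.polyfit(x, d, order), x)
```

Anything representable by a cubic over the half-scan is removed; higher-frequency structure passes through almost untouched.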
The final map is constructed as
$$m_p = \frac{1}{O^W} \sum_{t''} \sum_{t'} \sum_{t} A_{pt''}\, W^2_{t''t'}\, F_{t't}\, d'_t, \qquad (3.17)$$
where F is the polynomial filter, W is a weighting matrix and $O^W$ is the sum of the weights,
$O^W = W^T W$. The weighting matrix is given by
$$W^2_{tt'} = v_s^{-1}\, e(t/s)\, \delta_{\rm Dirac}(t - t'), \qquad (3.18)$$
where e(t/s) is some function which depends on the fractional position of a sample within
a half-scan (it goes to zero at either end of the half-scan) and $v_s$ is the variance of the
timestream for the half-scan of the given sample.
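Since the text only requires that e(t/s) vanish at the half-scan ends, the sketch below uses a Hann window as a stand-in for the actual taper:

```python
import numpy as np

def halfscan_weights(d):
    """Sample weights of eq. 3.18 for one half-scan. The taper e(t/s)
    must vanish at both half-scan ends; a Hann window is an assumed
    stand-in, since the text does not fix the functional form. Dividing
    by the half-scan variance v_s down-weights noisy scans."""
    n = len(d)
    e = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(n) / (n - 1)))
    return e / np.var(d)
```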
The weighting accomplishes two goals. First, after poly-filtering the half-scans, the
timestream from partially overlapping scans, from offset detectors, will have slightly differing
levels of large-angular scale CMB signal removed because the polynomial filter acts on
CMB and atmosphere alike. When these data are co-added into the map a step in intensity
appears at the edges of the scans, which has the effect of up-mixing large-angular scale power
into small-angular scales. The weighting function suppresses this effect without biasing the
map pixels.
Second, the half-scan variance vs is a fairly accurate measure of atmospheric noise level;
including it in the weight serves to downweight noisy data in the final map. The vs values
are also correspondingly binned into a “variance map” vp that provides an estimate of the
gross distribution and level of noise in the final maps. In practice, the variance maps reflect
the noise in the maps to within 10% (20%) accuracy at 100 GHz (150 GHz).
The resulting CMB temperature and variance maps are shown in Figure 3.9, already
calibrated from Volts to temperature units of µK, along with the deck-jackknife map.4
The absolute calibration methodology is described in the following section, followed by a
description of the jackknife mapmaking procedure.
3.3.5 Absolute Calibration
The absolute calibration factor — which scales the maps from Volts to µK — is derived
following the technique used by the Boomerang experiment (B03) and described in Masi et al.
(2006). The calibration is calculated in the Fourier domain by comparing modes between
the QUaD maps and B03 deep field maps, which overlap with QUaD. Before calculating the
absolute calibration, the unfiltered B03 maps are first passed through the QUaD simulation
pipeline (Section 3.5) to duplicate the QUaD field coverage and filtering.
The calibration factor in µK/V is then given by
$$\eta_b = \frac{B_b^{\rm quad} \left\langle \tilde m^{\rm b03x} \cdot \tilde m^{*\,\rm b03y} \right\rangle_b}{B_b^{\rm b03} \left\langle \tilde m^{\rm b03x} \cdot \tilde m^{*\,\rm quad} \right\rangle_b}, \qquad (3.19)$$
where m̃ and m̃∗ are the Fourier transform (FT) of the map and its complex conjugate
so that the terms inside the brackets are the cross power spectra, which are averaged over
`-bins b.5 There are two noise independent B03 maps used in this analysis, denoted by x
and y, one as reference and the other as a calibrator. The B terms in front are the respective
experiments’ beam transfer functions (see Section 5.1) that correct the averaged modes
for beam suppression. The final calibration number is taken to be η̄ = hηb i averaged over
200 < b < 800.
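A simplified sketch of eq. 3.19 follows; all input names are hypothetical stand-ins, and the real calculation additionally dithers pointing and propagates beam uncertainties:

```python
import numpy as np

def calib_factor(ft_b03x, ft_b03y, ft_quad, ell, bin_edges, B_quad, B_b03):
    """Per-bin muK/V calibration from ratios of binned Fourier-plane cross
    spectra (eq. 3.19). ft_* are 2-D map FTs, ell gives the multipole of
    each Fourier pixel, and B_* are per-bin beam transfer functions."""
    num = np.real(ft_b03x * np.conj(ft_b03y)).ravel()
    den = np.real(ft_b03x * np.conj(ft_quad)).ravel()
    idx = np.digitize(ell.ravel(), bin_edges) - 1
    eta = np.zeros(len(bin_edges) - 1)
    for b in range(len(eta)):
        sel = idx == b
        eta[b] = B_quad[b] * num[sel].mean() / (B_b03[b] * den[sel].mean())
    return eta
```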
4. The timestream of the PSB detector pairs are added together before constructing the map. This is
only done to maintain symmetry with the polarization maps where the timestream from the PSB pairs is
differenced prior to constructing the maps.
5. The same bins used for spectra calculation in Section 5.1
Figure 3.9: The QUaD 100 GHz (top left) and 150 GHz (top right) temperature maps
— the color scale is in µK. The maps have been smoothed with a 5’ FWHM Gaussian
kernel for presentation purposes only. The deck-jackknife maps (middle) are a stringent test
for systematic contamination in the data. Visually the maps appear to be free of signal
as expected. The variance maps (bottom) describe the gross pixel noise distribution and
coverage of the field.
The B03 maps were calibrated in the same way using the WMAP 1-year results (Bennett
et al., 2003) with a stated uncertainty of 2%, which carries through to our analysis; the
change in calibration between WMAP 1-year and 5-year (Hinshaw et al., 2009) results is
accounted for. Various potential sources of uncertainty in the calibration are considered:
the calibration is simulated assuming the B03 coverage and noise properties for random
realizations of CMB, the known pointing offset between B03 and QUaD is dithered in each
cardinal direction, the B03 and QUaD beam transfer functions are varied by their respective
uncertainties and the random fluctuation in ηb about its mean value η̄ is accounted for. The
net calibration uncertainty from all these sources added in quadrature is 3.5%.
3.3.6 Jackknife Maps
Jackknife6 maps are the difference of two maps constructed from independent but equivalent data sets; barring any systematic differences, all signal should cancel leaving only noise
consistent with zero — a null test. The jackknife maps and corresponding variance maps
are calculated as
$$m^J = \frac{1}{2}\left(m^{J1} - m^{J2}\right), \qquad (3.20)$$
$$v^J = \frac{1}{4}\left(v^{J1} + v^{J2}\right), \qquad (3.21)$$
where the J denotes the jackknife and the numbers denote the two datasets. For this analysis
there are four such maps that are considered and though the null tests are actually evaluated
in the power spectra, I will summarize the significance of each jackknife here.
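In code, eqs. 3.20-3.21 reduce to a one-liner; a sketch:

```python
import numpy as np

def jackknife(m1, m2, v1, v2):
    """Jackknife difference map and its variance map, eqs. 3.20-3.21:
    signal common to the two data splits cancels in m_J, while the
    (independent) noise variances add."""
    return 0.5 * (m1 - m2), 0.25 * (v1 + v2)
```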
6. This is a non-standard use of the term jackknife, which often means calculating statistics from subsamples excluding a single member of the set. Yet, the usage here is common in CMB analysis and was used
for all QUaD publications.
Deck Jackknife
As discussed in Section 3.1, the daily observations are split between two equivalent 8-hour
blocks that are each observed at a different deck-angle, with the entire telescope being rotated
by ∆dk = 60◦ about the pointing axis. Deck jackknife maps made from data grouped by
the two observation blocks contain the same sky observed at two different ranges in both
azimuth and time. Furthermore the telescope’s mechanical state is dramatically changed
with potential for flexure in the foam cone and shifting of cryogens. There is much room
for systematic failure from effects such as ground signal or changes in the beam. The deck-jackknife maps are plotted in Figure 3.9.7
Scan Direction Jackknife
Again, as discussed in Section 3.1, the telescope scans both forward and backward in azimuth
with each direction corresponding to a single half-scan. The scan direction jackknife maps are
constructed from half-scans of only the forward or backward direction. Since the half-scans take 30 s to complete, they are unlikely to be sensitive to any external contamination
but will be sensitive to internal mechanical or analysis issues. Most importantly, especially
for small-angular-scale results, this jackknife is a sensitive probe of the accuracy of the
deconvolution over the long-term.
Split-Season Jackknife
The split season jackknife is the difference of two maps made from the first and second halves
of the full set of days (two 8-hour observation blocks) used in the analysis. The 2006/07
austral winters are treated as a continuous set; the split is actually near the end of the 2006
7. A faint residual can be seen but only at the largest scales below the Science band i.e. ∼ ` < 200 or
∼ θ > 2◦ . The ultimate test of the jackknives is in the spectrum.
season. Between the two seasons, the telescope and receiver undergo basic maintenance
procedures. Thus this jackknife tests for both long term stability of the calibration, beam,
etc. and also verifies consistency between the two seasons.
Focal Plane Jackknife
For the purposes of a high-` temperature analysis, this is perhaps the least stringent jackknife
test. The PSBs are equally distributed into two orientation groups rotated with respect to
each other by 45◦ . The focal plane jackknife is the difference of maps made from each of
these groups. Since the orientations are well distributed across the focal plane,
it is hard to conceive of any systematic effect that would affect one group and not the other
and that would be manifest in total intensity maps.
3.4 Point Source Identification
The dominant sources of total power in the final QUaD maps are unquestionably CMB,
noise and point sources, roughly in that order. The point source power for the QUaD field
comes predominantly from a small number of bright sources observed with high significance
and must be masked or removed from the maps prior to estimating power spectra. Given
that a point source convolved with the beam will appear as simply beam shaped (effectively
a Gaussian), there is a straightforward way to identify them.
3.4.1 Two Dimensional Fourier Plane Optimal Filtering
To separate out the point sources in our maps I filter the maps with a two-dimensional
Fourier space optimal filter (Wiener filter) given by
$$F = \frac{S^{\rm pnt}}{S^{\rm pnt} + S^{\rm CMB} + N}, \qquad (3.22)$$
where the factors on the right are two-dimensional auto power spectra (2daps) for: the point
source signal $S^{\rm pnt}$, the CMB signal $S^{\rm CMB}$ and the instrumental noise N. The 2daps are
given by
$$S^{\rm CMB} = |\tilde m|^2. \qquad (3.23)$$
The CMB plus noise 2daps are estimated from the average of the signal plus noise simulation maps discussed in Section 3.5. I fix S pnt to be a beam suppressed flat (white-noise)
power spectrum representing a uniform distribution of point sources. The template amplitude at ` ∼ 2500 is set roughly equal to our unmasked bandpower value at that multipole
— an upper-limit to the source power. The result is not very sensitive to the amplitude of
the source power.
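A sketch of the filter construction of eq. 3.22, with the point-source template taken as beam-suppressed white noise (function and argument names are mine):

```python
import numpy as np

def point_source_filter(S_cmb_noise, B, amp):
    """Matched filter of eq. 3.22 on the 2-D Fourier grid. S_cmb_noise is
    the CMB-plus-noise 2-D power estimated from simulations, B the beam
    transfer function, and amp the (roughly chosen) flat point-source
    power amplitude; the template is white noise suppressed by the beam."""
    S_pnt = amp * B**2
    return S_pnt / (S_pnt + S_cmb_noise)
```

The filter passes modes where the beam-shaped source power dominates and suppresses modes dominated by CMB or instrumental noise.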
Shown in Figure 3.10 are the 2daps for signal plus noise, the point source template and the
resulting broadly azimuthally symmetric, two-dimensional filters. The dark vertical bands
running along the y-axis of the signal plus noise 2daps correspond to the TOD polynomial
filtering and were artificially imposed on the point source signal template to mimic what
should be present in the real maps. The CMB plus noise 2daps feature the dominant CMB
signal at low multipoles, which decays and is overcome by noise power at larger multipoles.
The non-uniform distribution of noise is evident as the symmetric circular spots, due to
channel-channel correlations, and vertical wedges at high-ell along the y-axis, which correspond to atmospheric noise. The filters are non-zero in the multipole range where we are
most sensitive to point source power — zero at the lowest multipoles where CMB signal
is dominant then rising to a peak near ` ∼ 2000 for 100 GHz and ` ∼ 2500 for 150 GHz
before decaying back to zero at the highest multipoles where instrumental noise is dominant
(Figure 3.11).
Figure 3.10: The two-dimensional Fourier space auto power spectra (2daps) for CMB plus
noise (left) estimated from the signal plus noise map simulations (see text in Section 3.5),
the template point source signal (center) and the resulting optimal filter given by eq. 3.22
(right) for both 100 GHz (top) and 150 GHz (bottom). The spectra are scaled radially by
`(` + 1)/2π.
The filtered map is then given by
$$f = \frac{{\rm FT}\left[\,F \cdot {\rm FT}(a \cdot m)\,\right]}{a}, \qquad (3.24)$$
where a is the apodization mask. In practice, it is necessary to apodize m prior to the FT
to reduce the ringing due to the sharp map boundary and to down-weight noisy edge regions.
This mask is carried through the FT and is also subject to the filter; after back-transforming,
it is divided out of the filtered map. This is formally incorrect in that there
is mode mixing between the mask and the map during the filtering, but since the filter kernel
is compact and the apodization mask is very broad in the image space, there should be
Figure 3.11: One-dimensional radial profiles of the two-dimensional auto-power spectra and
optimal filters shown in Figure 3.10. The radial profiles are only presented as a visual aid
and are not used in this analysis.
little interaction between the two and the apodization should roughly cancel.
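The filtering operation of eq. 3.24 takes only a few lines with numpy FFTs; this is a sketch, not the pipeline implementation:

```python
import numpy as np

def filter_map(m, a, F):
    """Eq. 3.24: apodize the map, FFT, apply the 2-D filter, inverse FFT,
    then divide the apodization back out. As noted in the text this is
    formally inexact (mode mixing between mask and map), but adequate for
    a broad mask and a compact filter kernel."""
    f = np.real(np.fft.ifft2(F * np.fft.fft2(a * m)))
    out = np.zeros_like(f)
    out[a > 0] = f[a > 0] / a[a > 0]
    return out
```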
The resulting filtered maps for 100 GHz are shown in Figure 3.12. These filtered maps
alone are not useful for systematic source identification — they must first be normalized by
the noise to make signal-to-noise maps.
Figure 3.12: The progression from the filtered map to the signal-to-noise map (see text).
During the filtering step the map is apodized to prevent ringing at the map boundary
(top). The apodization mask is divided out of the filtered map and then it is divided by
the pixel noise map to create the flattened signal-to-noise map (bottom).
3.4.2 Signal-to-Noise Cleaning
After filtering, the signal-to-noise maps are constructed as follows. First the maps are
"noise-flattened", i.e.
$$s' = f\, v^{-1/2}, \qquad (3.25)$$
where v is the pixel variance map, estimated from the TOD rms at the mapmaking stage as
described in Section 3.3. Though the filtering operation changes the noise amplitude in the
map and smoothes beam scale noise variations, v still provides information about the large
scale spatial noise distribution in the filtered map. This ensures that no part of the map is
biased with respect to pixel significance, which is most important for the source detection.
Note that v is actually used as the apodization mask ($a = v^{-1}$) in eq. 3.24 and is also
used to apodize the map in the spectra calculation described in Section 5.1. The process
of going from f to s′ is illustrated in Figure 3.12.
In the second step, the noise-flattened map s′ is rescaled so that the 16th percentile
point of its pixel distribution is equal to −1 to make it a signal-to-noise map s, which
would be precisely correct for a map with Gaussian distributed pixel values. For the filtered
QUaD maps this is effectively correct since both the instrumental noise and CMB (also
considered noise here) are both Gaussian distributed. Only the point source contribution is
non-Gaussian but the power from compact point sources is limited to a very small number
of pixels and will not skew the distribution significantly.8 The pixel distribution for the real
100 GHz filtered signal to noise map plotted in Figure 3.13 corroborates this point.
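The percentile rescaling of this second step is compact enough to show directly (hypothetical helper name):

```python
import numpy as np

def to_signal_to_noise(s_flat):
    """Rescale the noise-flattened map so the 16th percentile of its pixel
    distribution sits at -1; for Gaussian pixels the 16th percentile is
    -1 sigma, so this calibrates the map into units of significance."""
    return s_flat / (-np.percentile(s_flat, 16.0))
```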
To identify sources in s, a so-called CLEAN algorithm is used: a filtered-source template
b — constructed from the back-transform of F — is subtracted from the highest peak in s
8. In the case of extreme source contamination this procedure can be iterated by first removing the
brightest sources and then recalculating the scaling.
Figure 3.13: The 100 GHz signal to noise map (top left), normalized using its nearly Gaussian
pixel distribution (top right), before removing point sources. After removing point sources
(see text) with σ > 4 from the map (bottom left) the residual pixel distribution (bottom right)
is rendered almost perfectly Gaussian. Gaussian profiles are overplotted (red) for comparison
with the pixel distributions.
and this process is iterated down to a threshold value n. The source template is given by
$$b = {\rm FT}\left[\,F \cdot B\,\right], \qquad (3.26)$$
where B is the beam transfer function discussed in Chapter 4. Though the filter function
and beam transfer functions are both compact in the image plane, they do have significant
and extended sidelobe structure.
This procedure results in a source-free s-map with pixel values that are close to Gaussian
distributed in the range ±nσ, and a catalog of ≥ nσ sources, where σ ≈ 10 mJy at both
frequencies. Figure 3.13 illustrates the result of this procedure for n = 4. The catalogs are
used to mask out sources in the maps at the spectra calculation stage. In practice n = 5
produced a reliable set of sources without any false positive detections.
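A minimal version of the CLEAN loop described above (my sketch: the template is assumed peak-normalized and centered, and np.roll wraps at the map edge, a simplification relative to the real pipeline):

```python
import numpy as np

def clean_sources(s, b, n=5.0):
    """Iterative CLEAN of Section 3.4.2 on a signal-to-noise map s: find
    the highest peak, subtract the peak-scaled filtered-source template b
    (same shape as s, centered) shifted onto the peak, and repeat until
    nothing exceeds the threshold n. Returns the cleaned map and the
    catalog of (row, col, peak) detections."""
    s = s.copy()
    catalog = []
    r0, c0 = b.shape[0] // 2, b.shape[1] // 2     # template center
    while s.max() > n:
        r, c = np.unravel_index(np.argmax(s), s.shape)
        peak = s[r, c]
        s -= peak * np.roll(np.roll(b, r - r0, axis=0), c - c0, axis=1)
        catalog.append((r, c, peak))
    return s, catalog
```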
Seven point sources — listed in Table 3.1 — are detected at > 5σ in each of the 100 and
150 GHz maps (with six in both); all of these were matched with a low probability of chance association to PMN (Gregory et al., 1994) or SUMSS (Mauch et al., 2003) radio sources
using NED.9 Since radio point source populations typically follow a power law distribution,
these sources are only the sparse high flux end of an exponentially more numerous low flux
population. It is therefore necessary to estimate the power contribution from the low-flux
radio point source population. For this we explicitly simulate the source population as
discussed in Section 3.6.
3.5 Map Simulation
As previously stated the QUaD analysis relies on simulations to correct power spectra estimated from the final maps (Figure 3.9) for noise bias, filtering and beam convolution. To this
end the entire QUaD observation, low-level data reduction, and mapmaking pipeline is simulated. The simulations are based on properties of the receiver, which are measured through
special calibration tests discussed in this chapter and Chapters 2 and 4. This produces maps
of random realizations of the sky that are comparable to the real maps.
There are three basic simulations generated: the (CMB) signal, the (atmospheric and
9. http://nedwww.ipac.caltech.edu/
Table 3.1. Sources Detected in 2006/07 QUaD Maps (at > 5σ)

  RA [deg]   Dec [deg]   Flux^a [mJy]              Cross Match
                         100 GHz    150 GHz
  -------------------------------------------------------------------
   81.57     -48.51      401±12     299± 9     PMN J0526-4830
   89.56     -50.51      219±15     168±11     PMN J0558-5029
   87.44     -52.77      225±16     158±11     PMN J0549-5246
   85.61     -51.73      127±11      93± 9     SUMSS J054223-51425
   77.65     -47.81       75±11      57± 9     PMN J0510-4748
   88.38     -48.25       71±11      53± 8     PMN J0553-4815
   83.00     -48.47       71±14       —        PMN J0531-4827
   87.78     -49.23        —         46± 9     PMN J0551-4915

  a Errors do not include 5% absolute calibration uncertainty
instrumental) noise and the signal plus noise. Both the signal and noise are simulated
at the timestream level and then mapped using the mapmaking procedure — including
ground template subtraction and polynomial filtering — discussed in Section 3.3. The final
signal plus noise maps are constructed by summing the signal and noise at the map level;
summing either the timestream or the map is equivalent since the entire mapmaking process
is linear. A fourth component, the radio source foregrounds identified and masked out to a
minimum signal to noise threshold in the previous section (3.4), must also be accounted for
and are discussed in more detail in the following section. Five hundred simulation maps per
component are generated, amounting to 2000 maps for the signal and each of the jackknives,
10,000 in total.
3.5.1 Signal Simulations
The signal-only simulations are based on the WMAP 5-year model given in column two
of Table 2 in Dunkley et al. (2009) with zero SZE signal assumed and hereafter referred
111
to as ΛCDM. Power spectra for this model are generated using CAMB10 with foreground
gravitational lensing turned on. The resulting spectra are then input into the “synfast” sky
generator — part of the HEALPix11 software package — to generate curved sky realizations of the CMB with an angular resolution of 0.4 arcminutes (Nside = 8192). These sky
realizations are then resampled to a flat grid with an angular resolution of 1.2 arcminutes.
The maps are then smoothed by the measured beam profiles of each detector (see Chapter 4) and the simulated timestream is constructed by interpolating off the maps using the
pointing information. This is directly related to the map equation 3.8 with zero noise nt .
The random pointing wander discussed in Section 3.3.2 is simulated by generating random
Gaussian distributed start and end RA/Dec offsets (common to all detectors) with an rms
of 0.5 arcminutes for each day, then linearly stepping the pointing offset between them from
scan to scan. This results in a small broadening of the beam, and is consistent with what is
observed in the final maps.
3.5.2 Noise Simulations
The noise simulations are a substantially more complicated process because the noise must
be estimated from the timestream data itself; the goal being to completely reproduce the
noise properties of the data on all scales. Though the signal to noise in the timestream
is insignificant (the ΛCDM power spectrum is two orders of magnitude weaker than the
timestream PSDs) the ground contamination necessitates that the noise be estimated using
a cleaned version of it.
Ideally the ground template removed timestream from equation 3.14 would be used. Yet
the expectation value of the ground template — equation 3.16 — contains terms that do
not go to zero for the QUaD scan strategy and depend on the CMB signal and noise. Thus
10. see http://camb.info and Lewis et al. (2000)
11. see http://healpix.jpl.nasa.gov and Górski et al. (2005)
the ground template removal also effectively acts as a filter and changes the noise properties
of the timestream. Fortunately a simple way around this problem is to use lead/trail field
differenced timestream to estimate the noise, which eliminates any ground signal stationary
on half-hour time-scales.
To simulate noise the timestream data is Fourier transformed to calculate power spectra
(as shown in Figure 3.8) per detector per continuous scan-set (ten half-scans). The spectra
are then binned logarithmically and the covariance is calculated between all the detectors’
spectra
$$M_{ij} = \langle \tilde d_{f,i}\, \tilde d_{f,j} \rangle, \qquad (3.27)$$
where i and j label the detectors and f is the frequency bin. The (complex) covariance matrix
M is Cholesky decomposed and applied to a set of Gaussian random numbers to regenerate
the correlations between channels for each frequency bin but not between frequency modes.
The simulated timestream is generated for the entire field differenced scan-set and separate
realizations of the same spectra are made for lead and trail pairs. The simulated timestream
is then split up into half-scans, which actually reintroduces the frequency mode correlations.
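The Cholesky step can be sketched as follows (real-valued for simplicity; the pipeline applies it to complex Fourier amplitudes, and the regeneration of half-scan timestream is omitted):

```python
import numpy as np

def correlated_noise_draw(M, rng):
    """One realization of per-frequency-bin detector noise amplitudes with
    the measured channel-channel covariance (Section 3.5.2): Cholesky-
    decompose the n_det x n_det covariance of each frequency bin and apply
    it to unit Gaussian deviates. M has shape (n_freq, n_det, n_det)."""
    n_freq, n_det = M.shape[0], M.shape[1]
    out = np.empty((n_freq, n_det))
    for f in range(n_freq):
        L = np.linalg.cholesky(M[f])
        out[f] = L @ rng.standard_normal(n_det)
    return out
```

By construction each draw has covariance M within a frequency bin, while different bins remain uncorrelated, exactly the property described above.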
3.6 Simulating Point Source Foregrounds
The effect of residual point source contamination in our spectra will manifest as both an
increase in the total power at a given ` and an increase in the bandpower fluctuation.
Though it would be straightforward to estimate the mean power contribution from a given
point source model analytically the subtle effects of source identification and masking in the
analysis pipeline would be difficult to account for accurately. Moreover, the fluctuations are
also potentially non-Gaussian. So instead, the source population is simulated in the maps.
The statistical properties of the sub-Jansky radio source population are not well known
at 100 and 150 GHz. The most useful published data is the W-band (94GHz) WMAP
point source catalog (Wright et al., 2009). However, this catalog is only complete at fluxes
at or above several Jansky whereas even the brightest sources in the QUaD maps are all
sub-Jansky. Therefore I make use of theoretical models to predict the source distribution
below our source detection threshold. In particular, I consider the de Zotti et al. (2005)
extragalactic radio point source model, which was calibrated to the WMAP 5-year sources
(González-Nuevo et al., 2008) but below 100 GHz. Yet, though it is carefully constructed
using detailed astrophysics, the calibration at lower frequencies makes it a somewhat distant
extrapolation at 150 GHz. Thus I only adopt it to inform the overall shape of the radio
point source flux distribution dN/dS, the differential number of sources in a given flux bin.
The De Zotti distribution can be simply approximated as a broken power law
$$\frac{dN}{dS} \propto \begin{cases} S^{-\gamma_1} & {\rm for}\ S < S_0 \\ S_0^{(\gamma_2 - \gamma_1)}\, S^{-\gamma_2} & {\rm for}\ S \ge S_0 \end{cases} \qquad (3.28)$$
where S is the source flux, S0 is the break flux where the power law turns over and γ1 and
γ2 are the two power law slopes; for the 100 and 150 GHz models used in this analysis,
S0 ≈ 1Jy, γ1 ≈ 2 and γ2 ≈ 5/2. We use a source distribution derived from the sources
detected in QUaD maps (Table 3.1) to rescale the de Zotti model.
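For the simulations one needs random fluxes drawn from eq. 3.28; the rejection-sampling approach below is my own choice, and the S_min/S_max cutoffs are illustrative, not taken from the thesis:

```python
import numpy as np

def sample_fluxes(n, rng, S_min=1e-3, S_max=10.0, S0=1.0, g1=2.0, g2=2.5):
    """Draw n source fluxes from the broken power law of eq. 3.28 by
    rejection sampling against a log-uniform proposal. Slope and break
    values are the approximate ones quoted in the text."""
    def dNdS(S):
        return np.where(S < S0, S**-g1, S0**(g2 - g1) * S**-g2)

    # acceptance ratio dN/dS * S is maximal at S_min for these slopes
    peak = dNdS(np.array([S_min]))[0] * S_min
    out = np.empty(0)
    while len(out) < n:
        S = np.exp(rng.uniform(np.log(S_min), np.log(S_max), 4 * n))
        keep = rng.uniform(0.0, 1.0, len(S)) < dNdS(S) * S / peak
        out = np.concatenate([out, S[keep]])
    return out[:n]
```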
The QUaD distribution is generated by binning the sources in logarithmic flux bins in
order to ensure sufficient numbers per bin — appropriate for a power law distribution — and
the uncertainties on the dN/dS points are Poisson-based errors calculated following Gehrels
(1986). The source distribution must be normalized by the size of the observed field but the
QUaD coverage is non-uniform so the area will depend on the given flux bin — the sensitivity
varies across the map — and the detection threshold. The effective area at a flux S is given
by
$$A = A_p \cdot N\!\left(n \cdot \frac{f_p}{s_p} < S\right), \qquad (3.29)$$
Figure 3.14: The simulation input point source model (solid lines) is derived by applying a
simple rescale factor to the de Zotti radio point source model (dashed lines) at each frequency.
The rescale factor is determined from fits to dN/dS points derived from sources detected in
our maps (closed circles). For 100 GHz, the re-scaling also brings the de Zotti model into
better agreement with WMAP W-band dN/dS points (open circles).
where f and s are the filtered map (in units of Jansky) and signal-to-noise map respectively
(Section 3.4), the subscript p denotes individual map pixels, Ap is the area of a pixel, n is
the signal to noise threshold used and N(xp < S) is the number of x-map pixels less than S.
The source distribution is then given by
$$\left.\frac{dN}{dS}\right|_i = \frac{N(S_i < S \le S_{i+1})}{A_i\, \Delta S_i}, \qquad (3.30)$$
where $\Delta S_i = S_{i+1} - S_i$ is the width of the flux bin and N is the number of sources within
the bin.
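Eqs. 3.29-3.30 together can be sketched as follows (hypothetical helper; the area is evaluated at each bin's lower edge, one reasonable reading of eq. 3.29):

```python
import numpy as np

def dNdS_points(fluxes, f_map, s_map, A_pix, n, edges):
    """dN/dS of eq. 3.30 with the flux-dependent effective area of
    eq. 3.29: a map pixel contributes area at flux S when a source of
    that flux would clear the detection threshold there, n*(f_p/s_p) < S."""
    noise = f_map / s_map                    # per-pixel noise in Jy
    pts = np.zeros(len(edges) - 1)
    for i in range(len(pts)):
        lo, hi = edges[i], edges[i + 1]
        N_src = np.sum((fluxes > lo) & (fluxes <= hi))
        A_eff = A_pix * np.sum(n * noise < lo)
        if A_eff > 0:
            pts[i] = N_src / (A_eff * (hi - lo))
    return pts
```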
The resulting distribution points and their associated errors are plotted in Figure 3.14
compared with both the De Zotti models for 100 and 150 GHz as well as crude dN/dS points
derived from the WMAP W-band (94 GHz) point source catalog (Wright et al., 2009), using
only high signal to noise detections and assuming uniform sky coverage outside the galaxy
cut. While the De Zotti models do a good job at matching the gross properties of the source
distributions from QUaD and WMAP, they tend to over-predict the source density. Thus the
QUaD points are used to fit for a rescale factor k that is applied to the models, determined
by minimizing the likelihood L(k|ni ) of the model given the data assuming the points are
Poisson distributed, i.e.
L(k|n_i) = ∏_i (k µ_i)^{n_i} e^{−k µ_i} / n_i! ,               (3.31)
where n_i = (dN/dS)_i and µ_i is the de Zotti model expectation for flux bin i. This results in
k = 0.7 and 0.6 at 100 and 150 GHz respectively (see Figure 3.14). The rescale factors also
bring the models into better agreement with the dN/dS points determined from the WMAP
catalog.
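The rescale-factor fit can be sketched by minimizing the negative log of the likelihood in eq. 3.31 (the n_i! term is constant in k and dropped); scipy and the function name are used here purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_rescale(n_obs, mu_model):
    """Best-fit k for eq. 3.31: Poisson points n_i with expectation k*mu_i.

    Minimizes -log L(k) = sum_i [k*mu_i - n_i*log(k*mu_i)], constants dropped.
    """
    nll = lambda k: np.sum(k * mu_model - n_obs * np.log(k * mu_model))
    return minimize_scalar(nll, bounds=(1e-3, 10.0), method="bounded").x
```

For a pure Poisson model the maximum-likelihood solution is analytic, k = Σn_i / Σµ_i, which provides a convenient cross-check of the numerical fit.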
For point sources, unlike the CMB, I do not simulate the sky signal at the timestream
(TOD) level. Instead I directly inject point source populations into existing signal-plus-noise
maps as beam-sized blips. The injected source fluxes are drawn from a probability
distribution given by the scaled de Zotti models and the source positions are drawn at
random from a uniform probability distribution filling the map area, i.e. no clustering is
assumed. Since the source flux distributions are different at 100 and 150 GHz (predominantly
in amplitude), this procedure results in different populations in the maps at each frequency.
To avoid conflict between the respective source masks later in the spectrum estimation
stage, the source positions in both maps are paired from the brightest down, i.e. the first N
sources of the two lists correspond, where N is the lesser of N_100 and N_150, the total
numbers of sources at each frequency. Though this is clearly unphysical, in practice the brightest
sources in the real maps do correspond and the gross spectral properties of the simulated
residual source populations are preserved.
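The injection step can be sketched as below: fluxes are drawn from the (rescaled) dN/dS model by inverse-CDF sampling and added as circular Gaussian blips at uniformly random positions. All names are hypothetical, and the blip shape ignores pixel-window and ellipticity details.

```python
import numpy as np

def inject_sources(sky_map, n_src, flux_grid, dnds, fwhm_pix, rng):
    """Inject n_src beam-sized blips with fluxes drawn from a dN/dS model."""
    # Inverse-CDF sample of fluxes from the model counts on flux_grid.
    cdf = np.cumsum(dnds)
    cdf = cdf / cdf[-1]
    fluxes = np.interp(rng.random(n_src), cdf, flux_grid)

    ny, nx = sky_map.shape
    sigma = fwhm_pix / np.sqrt(8 * np.log(2))
    yy, xx = np.mgrid[:ny, :nx]
    for flux in fluxes:
        # Uniform random position: no clustering is assumed.
        x0, y0 = rng.random() * nx, rng.random() * ny
        sky_map += flux * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2)
                                 / (2 * sigma ** 2))
    return sky_map
```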
Finally, the simulated maps are run through the source identification algorithm exactly
like the real maps resulting in simulated source catalogs for masking and a residual point
source component that reflects the model expectation.
CHAPTER 4
THE BEAM MODEL
The sensitivity of a telescope to incoming radiation as a function of angular offset from the
pointing direction is commonly referred to as the telescope beam. This can be visualized as
the intensity pattern that would appear on the sky if a collimated beam was to be projected
outwards from the telescope by placing a point source at its focal plane. The beam is a two-dimensional response function which varies with distance from the focal plane and in the
far-field is often well described by a Gaussian near the pointing direction (i.e. the main lobe)
but will also exhibit ringing structure at larger angular scales (i.e. the near sidelobes). In
the parlance of shorter-wavelength observational astronomy (IR, optical, etc.), this is what
is usually referred to as the point spread function (PSF); the telescope resolution is typically
quoted as the full-width half maximum (FWHM) of the main lobe.
The total intensity measured by a telescope detector while pointed in some direction is
the convolution between the beam function centered at that point and the full sky,
(f ∗ g)(x) = ∫_{−∞}^{∞} f(y) g(x − y) dy                       (4.1)
where f is the beam function and g is the sky intensity function (map). Via this convolution,
the beam will mix sky power from a potentially wide range of scales. As will be discussed in
more detail in Section 4.7, some sidelobes — generated by obstruction along the beam path
that leads to reflection or diffraction of the beam — can extend to very large angular scales.
These far sidelobes can have the effect of contaminating the observation by integrating
over an unintended bright source, such as the Sun or Moon.
In the Fourier domain, a real-space convolution between the sky-power and the beam
becomes a simple multiplication between their respective Fourier transforms (FTs),
FT(f ∗ g) = FT(f ) · FT(g).
(4.2)
The square of the FT of the beam function is commonly referred to as the beam transfer function,
in that it describes the transfer of sky power through the beam.1 In the simple case of
a Gaussian beam, the FT is still a Gaussian and the result of the beam convolution is to
leave the power at the lowest multipoles unchanged but exponentially suppress the larger
multipole sky power.
As discussed in Section 5.1.6, the beam suppression can be accounted for to recover the true sky
power. Since the suppression is exponentially increasing with multipole, accurate measurement of the small angular scale CMB power spectrum relies on an exquisitely well measured
beam. As an example, consider how an uncertainty in the beam main lobe width will translate into a systematic uncertainty on a measurement of the CMB power at a given multipole.
The translation from a shift in beam width to a shift in power at a given multipole ℓ is given
by

S_ℓ = W_ℓ / W′_ℓ − 1 = e^{σ_b²(δ² + 2δ)ℓ(ℓ+1)} − 1            (4.3)
where S_ℓ is the systematic shift, W_ℓ is the beam transfer function (the square of the FT of
the real space beam), σ_b is the Gaussian beam width (FWHM/√(8 ln 2)) in radians, and δ is
a fractional shift in σ_b between the two beam functions. Assuming a 5% uncertainty (i.e.
δ = 0.05) on a FWHM = 3.5 arcmin wide beam, this will lead to a systematic shift of 20%
in the power at ℓ = 3000 but goes down to 4% for δ = 0.01.2
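The numbers above can be reproduced directly from eq. 4.3; this throwaway snippet is illustrative only.

```python
import numpy as np

def beam_shift(fwhm_arcmin, delta, ell):
    """Systematic power shift of eq. 4.3 for a Gaussian beam."""
    # sigma_b = FWHM / sqrt(8 ln 2), converted from arcmin to radians.
    sigma_b = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8 * np.log(2))
    return np.exp(sigma_b**2 * (delta**2 + 2 * delta) * ell * (ell + 1)) - 1

print(beam_shift(3.5, 0.05, 3000))  # ~0.19, i.e. a ~20% shift
print(beam_shift(3.5, 0.01, 3000))  # ~0.034, i.e. ~4%
```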
1. Regardless of whether it is actually commonly referred to as such, it should be, and the beam transfer
function is defined in this way throughout this thesis.
2. The function is close to linear for small values of δ, with a slope of ∼ 4.5 for these values specifically.

Through its three observing seasons QUaD has grappled with a number of beam model
complexities: a warped primary and de-focusing in the first season, resolved structure in the
astronomical source initially used for beam calibration, thermal expansion and contraction
of the foam cone and various far sidelobes. These complexities can be grouped into three
epochs:
Pre-Refocus: first-half of the first season (May05 - Jul05)
· RCW38 is used as the beam calibrator.
· The telescope is away from the ideal focus.
· The beams are elliptical due to a warp in the primary mirror.
· Large fluctuations in the beams caused by thermal expansion/contraction of the
foam cone and exacerbated by the de-focus.
Post-Refocus: second-half of the first season (Jul05 - Sep05)
· RCW38 is still used as the beam calibrator.
· The beam ellipticity persists.
· The telescope is manually brought closer to the ideal focus, reducing the strong
fluctuations in the beams.
Post-Correction: second and third seasons (May06 - Sep07):
· QSO PKS 0537-441 is used as the beam calibrator.
· The beam ellipticity is substantially reduced with the introduction of a corrective
secondary mirror.
· The telescope is maintained at the ideal focus via a remote-controlled focus mechanism, installed behind the secondary mirror, which adjusts the mirror separation.
The following sections detail efforts to characterize the beam in each of these epochs.
Section 4.1 reviews the beam calibration sources, the observations tailored for beam calibration study and the basic analysis methods. Section 4.2 addresses the long term stability of
the beams and their temperature dependence. Section 4.3 details the main–lobe parameter
analysis using the calibration observations and Section 4.4 highlights the non-Gaussian near
sidelobes. Section 4.5 addresses the stability of the beam positions within the array and
the stability of the telescope pointing. Section 4.6 discusses the beam uncertainty due to
the main lobe parameter analysis, near sidelobes and the beam fluctuations. Finally, Section 4.7 addresses the large angular scale features of the beam and the associated systematic
contamination of the data.
4.1 Beam Calibration Observations
Ideally the beam might be mapped in a laboratory setting by carefully observing controlled
sources of emission with a pre-assembled telescope. In practice, the disassembly and reassembly of the telescope will change its optical conditions in unpredictable
ways. Furthermore, real-time changes in the telescope optical state (i.e. changes in the focus,
warping of the mirrors, etc.) can change the instantaneous beam profiles significantly. For
this reason observations of a beam calibrator are essential.
Given the design of the QUaD telescope and the extreme environment of the South Pole,
it was not practical to use a terrestrial source for calibrations, such as thermal sources on
towers or suspended from balloons. Instead the beam profile can be measured simply by
observing an astronomical point source; of course, this assumes the existence of such a point
source at the telescope frequency, resolution and sensitivity. Typically CMB experiments
operating at or near QUaD frequencies, such as ACBAR (Kuo et al., 2004), use planets as
effective calibrators.
Unfortunately the ground shield obscures the planets, which are always near the horizon
at the South Pole, and in their place QUaD initially used the compact HII region RCW38
as a beam calibrator; primarily because RCW38 is detected with high signal to noise in
a single scan, which made it an attractive source for quick and frequent beam calibration
observations. Based on previous observations of the object such as in Coble et al. (2003) it
was assumed that the core would be unresolved by the telescope and would dwarf any diffuse
background emission. Unfortunately it was confirmed well into the second observing season
that the core of RCW38 is in fact resolved by QUaD as is the large amount of surrounding
diffuse emission. Yet, the frequent observations of RCW38 still provide a good dataset to
measure the stability of the beams (the fluctuation of the main lobe width for example) even
though the full beam shape may not be well determined.
Bright (> 1 Jy) extragalactic radio point sources such as Active Galactic Nuclei (AGNs)
— easily observable by QUaD — are quite sparse at 100 and 150 GHz. Figure 4.1 shows the
distribution of such objects in the WMAP W-band point source catalog (Wright et al., 2009).
After RCW38, the next brightest source below Dec=-30 is PMN 0538-4405, also known as
PKS 0537-441 (as I prefer to call it), a variable BL Lac blazar at z ∼ 0.89. Optical and IR
observations of PKS 0537-441 reveal it to be compact and isolated; all other galaxies within
approximately one arcminute of the AGN are roughly 6-8 magnitudes fainter (1000 times
fainter). Though it is a highly variable source, changing in flux by as much as a factor of 5 in
the span of one year, this has no impact on the derived beam profiles. Upon confirming that
RCW38 was not adequate for characterizing the beam main-lobe shape, PKS 0537-441
was used as the beam calibrator instead.
QUaD employs two observational schemes for monitoring the beam. The first is full-day
standard raster map observations of the beam calibration source, referred to as beamruns. These are done periodically throughout the observing season to monitor the full
two-dimensional beam profile. RCW38 was observed via beamruns in all three seasons but
was only used in the first season results. Nine beamruns using PKS 0537-441, all completed
at the end of the third season, were used for the second and third season results. The process
of measuring beam parameters from these maps is discussed in more detail in Section 4.3.2.
[Figure 4.1 plot: DEC [deg] versus RA [deg] for WMAP W-band sources (including J0403−3605, J0522−3628, J0538−4405, J0210−5101, J0253−5441 and J0635−7516) and RCW38, with symbol area proportional to I_src/I_max.]
Figure 4.1: Extragalactic sources from the WMAP W-band point source catalog (Wright
et al., 2009). The dashed line roughly represents the lower elevation limit of the telescope
due to the ground shield, the area of the orange circles represents the intensity of the given
source scaled to PMN 0538-4405 and only sources with at least half its flux are shown circled.
The galactic source RCW38 is also plotted here for comparison.
The second scheme relied on the rowcal observations. During a rowcal each detector is
scanned four times across RCW38 at a fixed elevation, measuring one-dimensional cross-sections of the two-dimensional beam function that appear as blips in the TOD (Figure 4.4).
The rowcal blips are fit to Gaussian profiles. The peak amplitudes can be used to cross-calibrate the detectors but tracking the blip-widths from day to day is used to monitor the
long-term beam width stability. This is actually a robust scheme for a quick measurement
of a two–dimensional Gaussian beam main lobe profile, as I will discuss in more detail in
Section 4.3.1.
For both observation schemes, I fit Gaussian profiles to the data using the function minimization and error analysis package MINUIT (well-known in the high energy and particle
physics community). MINUIT is optimized to find the minimum of a multi-parameter function, such as a χ² or log-likelihood, to determine the best-fit parameters, and to analyze the
shape of the function about its minimum in order to determine the parameter errors. I use
a χ2 function,
χ² = (d_t − m(p, x_t)) C⁻¹_{tt′} (d_{t′} − m(p, x_{t′}))      (4.4)
where d_t is the rowcal scan TOD or beamrun map data, m(p, x_t) is a model being fit to the
data with parameters p evaluated at coordinate xt (position, time, temperature, etc.) and
the covariance matrix C is either simply given by the variance of a TOD scan or by the full
pixel-pixel covariance matrix in the case of beamrun maps. The fit–parameter covariance
matrix M (the parameter errors) returned by MINUIT is the inverse of the second derivative
matrix of the χ2 function evaluated around the best fit parameters, i.e. the Fisher matrix F
F_ij = ⟨ ∂χ²/∂p_i · ∂χ²/∂p_j ⟩|_{p̂}                          (4.5)
where p̂ is the vector of best–fit (maximum–likelihood) parameters and M = F⁻¹. If the
data covariance matrix C correctly describes the uncertainty on the input data, the returned
fit–parameter covariance matrix M is properly normalized.
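Since MINUIT itself is not a Python tool, a hedged analogue of the χ² fit of eqs. 4.4-4.5 is sketched here with scipy: passing the full data covariance as sigma with absolute_sigma=True makes the returned parameter covariance play the role of M = F⁻¹. The synthetic data and parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, x0, sigma):
    return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

# Synthetic rowcal-like blip: 3.5' FWHM beam, 8 samples per arcmin.
x = np.arange(0.0, 20.0, 0.125)
sig_true = 3.5 / np.sqrt(8 * np.log(2))
rng = np.random.default_rng(0)
d = gauss(x, 1.0, 10.0, sig_true) + 0.01 * rng.standard_normal(x.size)

# chi^2 of eq. 4.4 with C a white-noise covariance here; M approximates F^-1.
C = 0.01 ** 2 * np.eye(x.size)
p, M = curve_fit(gauss, x, d, p0=[1.0, 10.0, 1.0], sigma=C,
                 absolute_sigma=True)
```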
4.2 Beam Stability
When the telescope is at or near the ideal focus, small fluctuations in the primary/secondary
mirror separation will lead to small fluctuations in the beam width. As the focus drifts away
from ideal, the sensitivity of the beam width to the mirror separation increases exponentially.
For QUaD, the mirror separation is primarily determined by the height of the cone; as the
polar temperatures fluctuate with the weather from day to day, the cone will expand and
contract in response, thereby changing the mirror separation. These thermal fluctuations
will directly affect the beam stability. Conveniently, the external ambient temperature is
continuously measured by thermometers at a weather station on the MAPO building, making
it possible to characterize the beam width temperature dependence.
Figure 4.2 illustrates the temperature dependence of the two-dimensional elliptical Gaussian beam profiles for all three epochs derived from rowcal blip-widths (see Section 4.3.1).
Studying the figure, it is clear that there was a dramatic change in the beam width between the Pre– and Post–Refocus epochs when the telescope was brought closer to the ideal
focus; both the magnitude of the width and the degree of ellipticity were dramatically reduced. Going from Pre–Refocus to Post, the innermost 150 GHz channels changed from
an ellipticity of ∼ 2 — defined as simply the ratio of semi–major to minor axes — to an
ellipticity of ∼ 1.25, whereas the other channels became nearly circular. The magnitude
of the beam–width temperature dependence is clear. Pre–Refocus the innermost 150 GHz
channels’ width varied by as much as 20% in the temperature range considered. Post–Refocus
this was still only reduced to ∼ 10%, with the other channels exhibiting similar reductions
in sensitivity to the thermal fluctuations of the cone. In the Post–Correction epoch a corrective secondary mirror was installed to compensate for the warped primary mirror and the
telescope was properly set to its ideal focus minimizing and circularizing the beam widths.
Figure 4.2 clearly demonstrates that this eliminated the sensitivity to thermal fluctuations
in the cone.
Figure 4.3 shows the resulting blip widths for all days used in the analysis in 2005 and
2006 after removing the temperature dependence of the beam. The Pre–Refocus residual
day–to–day fluctuations are still substantial at about 13% but are reduced to a more reasonable 5% Post–Refocus and ∼3% Post–Correction. This remaining fluctuation is no longer
correlated with the ambient temperature but is slightly larger than what would be expected
from measurement noise alone. It may still be driven by thermal fluctuations in the cone but
perhaps correlated with interior temperature fluctuations not accounted for in the temperature model. This residual fluctuation remains and informs our estimate of the uncertainty
[Figure 4.2 plot: Gaussian FWHM [arcmin] versus temperature [°C] for the beam semi–major (left) and semi–minor (right) axes, showing the 100 GHz and inner/outer 150 GHz channels in the Pre–Refocus, Post–Refocus and Post–Correction epochs.]
Figure 4.2: This figure illustrates the temperature dependence of the beam width; both the
semi–major (left) and minor (right) axes of the elliptical Gaussian profiles are shown as
derived from rowcal blip widths (see text in Section 4.3.1). Individual measurements were
binned into roughly 3◦ C bins and then averaged over all channels at a common radial offset
from the pointing axis (the 150 GHz channels are the innermost and outermost rings and
the 100 GHz channels are in an intermediate ring, see Figure 2.7). The average width and
temperature dependence for the Pre–Refocus (filled squares), Post–Refocus (open squares)
and Post–Correction (filled circles) epochs are shown.
[Figure 4.3 plot: rowcal blip–width [arcmin] versus time from May05 to Sep06, for the 100 GHz and inner/outer 150 GHz channels in the Pre–/Post–Refocus (left) and Post–Correction (right) epochs.]
Figure 4.3: This figure illustrates the day–to–day fluctuations of the beam main–lobe width
as measured by rowcal blip–widths. The temperature dependence of the beam has been
removed leaving only residual fluctuations not correlated with external ambient temperature
and mostly consistent with measurement noise (see Section 4.6).
on the beam main lobe width (see text in Section 4.6). Still, Figure 4.3 demonstrates there
are no long term drifts in our beam widths within a given epoch.
4.3 Main-Lobe Parameters

4.3.1 From Rowcals
Though the rowcals are one–dimensional scans at deck–angles not necessarily aligned with
the beam semi–major or minor axes, they were used almost exclusively to construct the
temperature dependent beam model (see Figure 4.2 for temperature dependence). The only
additional information necessary were the beam orientations derived from beamruns and a
width normalization from a quasar in the 2005 final full–co–added CMB intensity maps.
This section details the method used to derive the beam semi–major and minor axes from
rowcals.
The equation for a generic elliptical Gaussian profile is

G(x) = a e^{−(x−x₀)ᵀB(x−x₀)},                                  (4.6)
where x is the cartesian coordinate position, i.e. (x, y), x0 is the centroid offset from the
origin, a is the scalar amplitude and B is a positive-definite matrix which scales the ellipse
axes and rotates them with respect to the coordinate axes x̂. In the case when B = (2σ²)⁻¹I,
where I is the identity matrix, G is a simple two–dimensional Gaussian profile with width
σ. In the case when B is given by
B = [ 1/(2σ_x²)   0 ;  0   1/(2σ_y²) ],                        (4.7)
and σx > σy , G is an elliptical Gaussian with a semi–major axis width σx aligned along the
x–axis and a semi–minor axis width σy aligned along the y–axis, i.e.
G(x, y) = a e^{−[(x−x₀)²/(2σ_x²) + (y−y₀)²/(2σ_y²)]}.         (4.8)
The (messy) expression for an arbitrary rotation of the semi–major axis away from the x–axis
is given by

B = (1/(2σ_x²)) [ cos²φ   −(sin 2φ)/2 ;  −(sin 2φ)/2   sin²φ ] + (1/(2σ_y²)) [ sin²φ   (sin 2φ)/2 ;  (sin 2φ)/2   cos²φ ].      (4.9)
The 1–σ contour of G(x, y) aligned with the x–axis, given by eq. 4.8 (i.e. where G(x, y) =
a e^{−1/2}) traces out the shape of an ellipse with semi–major and minor axis widths σ_x and
σy . A planar cross section of the two–dimensional function G at some angle θ from the
semi–major axis (passing through the centroid) will be a one–dimensional Gaussian,
g(c) = a e^{−c²/(2σ_c²)}                                       (4.10)
where the width σc is equal to the length of the chord that passes through the 1–σ–contour
ellipse at the same angle θ from the x-axis given by
σ_c² = σ_x²σ_y² / (σ_x² sin²θ + σ_y² cos²θ),                  (4.11)
and c is the coordinate distance from the centroid position along the chord.
The rowcal blips are exactly such one–dimensional cross sections through the two–dimensional Gaussian main–lobe beam profile and the blip–widths will be a function of the beam
main–lobe semi–major and minor widths. Furthermore, rowcals are done at two distinct
deck–angles (θ) separated by 60 degrees (∆θ), providing two independent samples of the
semi–major and minor axes. In principle, this is enough information to solve for the ellipse
axes, but there is a catch: the beam ellipses are not aligned with respect to the array or
each other. We input the beam ellipse orientation angles φ from fits to the two–dimensional
beamrun maps, discussed in Section 4.3.2, assuming that these orientations are fixed by the
mirror deformations and unlikely to vary with temperature or focus (only flipping by 90◦ if
the major and minor axes reverse). The final equations that must be solved to determine
the ellipse axes are,
σ_x² = σ₁²σ₂² sin(∆θ) sin(2φ + θ₁ + θ₂) / (σ₂² sin²(φ + θ₂) − σ₁² sin²(φ + θ₁))

σ_y² = σ₁²σ₂² sin(∆θ) sin(2φ + θ₁ + θ₂) / (σ₁² cos²(φ + θ₁) − σ₂² cos²(φ + θ₂))      (4.12)

where σ_i are the blip-widths corresponding to the two rowcal deck-angles θ_i and ∆θ = θ₂ − θ₁.
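Equations 4.11-4.12 can be checked numerically with a short round-trip sketch (hypothetical function names; ∆θ is taken as θ₂ − θ₁):

```python
import numpy as np

def chord_width(sx, sy, alpha):
    """Eq. 4.11: 1-D cross-section width at angle alpha from the major axis."""
    return np.sqrt(sx**2 * sy**2
                   / (sx**2 * np.sin(alpha)**2 + sy**2 * np.cos(alpha)**2))

def ellipse_axes(s1, s2, th1, th2, phi):
    """Eq. 4.12: recover (sigma_x, sigma_y) from two blip widths s1, s2
    measured at deck-angles th1, th2, for an ellipse rotated by phi."""
    num = s1**2 * s2**2 * np.sin(th2 - th1) * np.sin(2*phi + th1 + th2)
    sx2 = num / (s2**2 * np.sin(phi + th2)**2 - s1**2 * np.sin(phi + th1)**2)
    sy2 = num / (s1**2 * np.cos(phi + th1)**2 - s2**2 * np.cos(phi + th2)**2)
    return np.sqrt(sx2), np.sqrt(sy2)

# Round-trip check: simulate two rowcal cross-sections 60 degrees apart.
sx, sy, phi = 2.5, 2.0, 0.4
th1, th2 = 0.1, 0.1 + np.pi / 3
s1 = chord_width(sx, sy, phi + th1)
s2 = chord_width(sx, sy, phi + th2)
```

For this configuration ellipse_axes(s1, s2, th1, th2, phi) recovers (2.5, 2.0).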
Two further complications are the extended emission around RCW38 and the possibility
that we are also resolving the source; these will have two distinct effects on the Gaussian
model fitting. The first will be to broaden the model width to account for the extended
emission in the Gaussian tails which is not associated with the compact source. If the
extended emission is not uniform and isotropic within a beam width, this may also distort
the shape of the main lobe, but does not appear to do so. Figure 4.4 shows the TOD
from a typical rowcal observation of RCW38 that has been fit with a Gaussian profile both
including all the data first and then excluding the extended emission “wings”; the difference
is a percent level systematic error.
Figure 4.4: A sample RCW38 rowcal blip in the un–calibrated TOD. The blue represents all
of the data in this scan while only the green points remain after masking the wings. Both
sets of data are fit with Gaussian profiles (black and red lines respectively). Excluding the
wings results in a significantly better fit to the main lobe and a narrower blip–width.

There is reason to believe we are partially resolving the inner core of the source also.
Comparing two–dimensional Gaussian fits to both a) beamrun maps of RCW38 made by
co–adding the data across all the channels at a given frequency (see Figure 4.5) and b)
the equivalent maps for quasars coincidently observed in our CMB field reveals that the
RCW38 main lobes are indeed broader. This means that the rowcal blip fits aren’t accurately
measuring the main lobe even when the extended emission is masked out. To account for
this broadening, I estimated a RCW38 “intrinsic–width correction factor” to apply to all
rowcal blip widths. The correction factor is calculated by making the gross assumption that
the underlying shape of RCW38 is also Gaussian and adds in quadrature to the beam width
when observed (true for the convolution of two Gaussians), i.e.
σ_obs² = σ_src² + σ_beam²                                     (4.13)
where σ_beam is assumed to be measured by the CMB–field quasar, σ_obs is measured by the
RCW38 beamrun map and σ_src is the intrinsic RCW38 Gaussian width.

Figure 4.5: Maps of RCW38 made by using all of the beamrun data in the Post–Refocus
epoch and co–adding over all of the channels in a given frequency group. These very high
signal–to–noise maps clearly reveal the non–uniform extended emission in which the compact
core resides.

Thus the correction
factor is the quadrature difference between the RCW38 co–added beamrun width and the
CMB–field quasar width.
That all being said, the rowcal derived main-lobe parameters were only used for the 2005
season data, which was not used for the final results of this thesis. Furthermore, the CMB
field was adjusted in the 2006/07 seasons so that PKS0537-441 was no longer in the field.
4.3.2 From Beamruns
Figures 4.6 and 4.7 show the result of the more conventional beamrun observations — maps
of the beam calibration source for each detector in the array — for RCW38, Pre–Refocus,
and PKS0537-441, Post–Correction, respectively. Note that the individual panels in these
two figures reflect the angular offsets of each feed horn with respect to the pointing direction
and the axes of each panel are set to the same scale so that the apparent size of the source
versus the offsets of each horn are accurate. As discussed in Section 4.3.1, the RCW38
beamruns were only used for determining the orientation angles of the beams, whereas the
rowcals were used to construct the full two–dimensional beam model.3 On the other hand,
PKS0537-441 beamruns were used exclusively for the Post–Correction beam model since
there was no evidence for significant temperature dependence (see Figure 4.2) or long term
drifts in the width (see Figure 4.3). Comparing the two beamrun figures further reinforces
the magnitude of the Pre–Refocus ellipticity and the vast improvement Post–Correction, but
should also make the difference in the flux of these two sources apparent. Pixel noise, due
to atmosphere and instrument, is clearly visible in the lower signal–to–noise PKS0537-441
observations.
The beamrun maps are constructed through binning TOD samples by their RA/DEC
coordinates in degrees on sky. This operation is represented by the usual mapping equation,
b = Ld,                                                        (4.14)
where d is the TOD sample vector, b is the pixelized beamrun map pixel vector, and L is
the n_p × n_t inverse pointing matrix (i.e. binning matrix) that maps the TOD samples to
the beamrun map pixels, and where n_p and n_t are the lengths of the beamrun–pixel and
TOD–sample vectors respectively. The TOD sample vector is given by
d = Ab₀ + n                                                    (4.15)
where n is the instrumental and atmospheric noise, b0 is the true beam convolved source
and A is the nt × np pointing matrix. This is the same mapping analysis presented in
Section 3.3.1, consult references cited therein.
3. This convoluted procedure was not used to model the beams for the final analysis in this thesis. It was
used exclusively for the 2005 season results presented in Ade et al. (2008).
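The action of the binning operator L in eq. 4.14 amounts to accumulating TOD samples into their assigned pixels and averaging; a minimal sketch with hypothetical names:

```python
import numpy as np

def bin_tod(tod, pix_index, n_pix):
    """b = L d: average all TOD samples falling in each map pixel."""
    hits = np.bincount(pix_index, minlength=n_pix)
    summed = np.bincount(pix_index, weights=tod, minlength=n_pix)
    # Unhit pixels are set to NaN rather than zero.
    return np.where(hits > 0, summed / np.maximum(hits, 1), np.nan)
```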
Figure 4.6: Individual channel maps of RCW38 from a single beamrun observation in the
Pre-Refocus epoch for a single PSB per feed. Two–dimensional Gaussian profiles are fit to
the data and ellipses representing the measured widths are over–plotted in red along with a
solid line emphasizing the ellipse orientation. There is a striking degree of alignment (though
not perfect) between the channels in the Pre–Refocus epoch. The ellipticity of the beams
are also clearly evident. Missing panels correspond to channels not used in the analysis of
data from this particular epoch.
Figure 4.7: Individual channel maps of PKS0537-441 from a single beamrun observation in
the Post–Correction epoch for a single PSB per feed. Two–dimensional Gaussian profiles are
fit to the data and ellipses representing the measured widths are over–plotted in red along
with a solid line emphasizing the ellipse orientation. There is only a weak alignment
evident between the channels in the Post–Correction epoch and the ellipticity is marginal.
Figure 4.8: Top panels show the point source masks normally used (left) and the one being
used for this analysis (right). The middle and bottom panels show the results of a 3rd order
polynomial filter applied to the data using the point source mask. The data above is for
one channel on 070720. Note that the aspect ratio for the bottom row of panels artificially
distorts the round beam into an ellipse.
The atmospheric component of the noise in eq. 4.15 can be a very significant component
of the total signal and is highly correlated over the length of a scan. Just as in the CMB
analysis, the atmospheric signal can be mostly filtered out by fitting a 3rd –order polynomial
to the raw TOD scan prior to mapping. TOD samples within 3–σ of the source centroid are
excluded from the polynomial fitting to prevent filtering modes associated with the source
itself. In order to pass all the channels in the array over the calibration source, the scans are
actually much longer than they need to be for a given pixel. So an additional mask is also
applied — excluding those samples in the scans that are far from the source — so that the
poly–filter removes atmospheric noise modes at smaller–scales near the source. An example
of the mask used is shown in Figure 4.8; the white regions are excluded from the TOD scans
when poly–filtering and the improvement in the atmospheric contamination is evident.
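The per-scan atmospheric filter just described can be sketched in a few lines; the convention that mask=True marks excluded samples (the source region and the far-from-source samples) is an assumption of this sketch.

```python
import numpy as np

def poly_filter_scan(scan, mask, order=3):
    """Fit a 3rd-order polynomial to the unmasked samples of one TOD scan
    and subtract it from the whole scan."""
    x = np.arange(scan.size, dtype=float)
    coeffs = np.polyfit(x[~mask], scan[~mask], order)
    return scan - np.polyval(coeffs, x)
```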
Finally the n_p × n_p pixel–pixel covariance can be calculated as

N_p = ⟨b bᵀ⟩ = L⟨n nᵀ⟩Lᵀ + ⟨b₀ b₀ᵀ⟩ = L N_t Lᵀ + N_b          (4.16)
where Nt is the nt × nt TOD sample–sample covariance matrix and Nb is the covariance of
the beam itself, which within a given beamrun observation can safely be ignored.
Since the PKS0537-441 beamrun maps are relatively low signal–to–noise, the fitting error
on the main–lobe Gaussian beam parameters may be significant and therefore it is
important to properly characterize Np for use in the fits. In the CMB analysis, this is
achieved through simulation — noise power spectra for CMB observations are estimated
from the TOD then simulations are generated with the same properties and mapped — and
the spread in the simulated maps provides the uncertainty. This is done in part because the
number of TOD samples is outrageously large so storing the pointing matrix is untenable
but also because the number of pixels in the map is also large (n_p ∼ 10^5) and inverting N_p
when fitting beam profiles is unreasonable. In the case of PKS0537-441 beamruns, the map
only needs to be a few beam widths across in order to characterize the main lobe; binning
into 0.02 degree square pixels means a map with n_p ∼ 10^2 is sufficient in size, which also
implies n_t ∼ 10^4. These are reasonably sized matrices to work with.
The sample–sample noise covariance matrix can be estimated from the data itself by
using a lag–covariance method. I define the covariance between the ith TOD sample and
another ∆i samples away as,
C_ij = v_S f_corr(∆i),                                         (4.17)
where vS is the variance of the samples dS in a single fixed–declination scan subset S of the
full TOD vector d, ∆i = i − j and fcorr is the correlation function defined as,
f_corr(∆i) = v_D⁻¹ Σ_{i∈D} d_i d_{i+∆i} / n_D ,                (4.18)
where d_i is the ith sample in the subset D of the full TOD vector d and is defined by

d_D = (d_1, …, d_{n_t+∆i}) if ∆i < 0,  or  d_D = (d_{1+∆i}, …, d_{n_t}) if ∆i ≥ 0,      (4.19)
(4.19)
vD is the variance of dD and nD is the number of samples in D, which is less than the total
number of TOD samples nt since the shift ∆i moves samples out of range. The point is: I use
nearly all of the data D to estimate the noise correlations, but fix the overall normalization
(the variance) from per-scan data S, effectively down-weighting noisy scans.
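A simplified version of the lag-covariance estimator of eqs. 4.17-4.18 can be sketched as below; for brevity this sketch normalizes by the variance of the full vector rather than recomputing v_D for each lag.

```python
import numpy as np

def corr_fn(d, max_lag):
    """Noise correlation function f_corr(lag) estimated from the TOD itself."""
    d = d - d.mean()
    v = d.var()
    f = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        # Average of d_i * d_{i+lag} over the overlapping samples (eq. 4.18).
        n = d.size - lag
        f[lag] = np.mean(d[:n] * d[lag:]) / v
    return f
```

The full covariance of eq. 4.17 then follows by scaling each scan's correlation function by that scan's variance v_S.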
The resulting correlation functions, averaged over all channels in a given frequency group and over all beamruns, are plotted in Figure 4.9 as correlation versus sample offset (with ∼ 8 samples per arcmin or ∼ 10 samples per map pixel). The functions show two distinct features: a small–scale peak related to the low–pass filter applied at the initial data reduction stage and a few–arcminute–scale ramp likely associated with residual 1/f atmospheric noise. The values of the functions become increasingly noisy as the separation increases and the number of available samples decreases. The power spectra of these correlation functions are plotted in Figure 4.10 to make the relationship between the correlation functions, the low–pass filter and the atmosphere clearer. As expected, atmospheric correlations are stronger at 150 GHz.
The resulting pixel–pixel covariance matrix for a single channel map is shown in Figure 4.11. The map–pixel–space, row–row correlations have been forced to zero — as they are nearly zero anyway — resulting in the block–diagonal form. Since the sample correlation functions are symmetric with respect to ∆i, the resulting pixel covariance matrix blocks are all Toeplitz.
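Because the row–row correlations are zeroed and each within–row block depends only on pixel separation, the covariance can be assembled from identical Toeplitz blocks. A sketch (SciPy assumed; the correlation function and map shape are toy values):

```python
import numpy as np
from scipy.linalg import block_diag, toeplitz

def pixel_covariance(f_pix, n_rows, n_cols):
    """Block-diagonal pixel-pixel covariance: one Toeplitz block per map
    row built from the pixel-lag correlation function f_pix, with
    cross-row correlations forced to zero (cf. Figure 4.11)."""
    block = toeplitz(f_pix[:n_cols])      # C_ij = f_pix(|i - j|) within a row
    return block_diag(*([block] * n_rows))

f_pix = np.exp(-np.arange(12) / 3.0)      # toy correlation, decaying with lag
N_p = pixel_covariance(f_pix, n_rows=4, n_cols=10)
```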
Two–dimensional Gaussian profiles are fit to the filtered beamrun maps using the pixel–
pixel covariance matrices to properly determine the uncertainty on the beam parameters, as
described in Section 4.1, equations 4.4 and 4.5. The results for a single PSB channel per
feed horn, derived from three of the total nine beamrun observations, are shown in Figure
4.12; these were the days with lowest atmospheric noise and are the only parameters that
make a significant contribution to an error–weighted mean. The scatter from day–to–day
in the width parameters is consistent with the errors on the fit parameters, indicating no
significant additional source of beam variation; though the widths of the channels within
a given frequency group do vary significantly. There is also good agreement between the
reduced χ2 distributions for the fits and the theoretical expectation (plotted inset in the
top panel of the figure) for that number of pixels and model parameters. Averaged over the three days, the per–channel uncertainty on the main–lobe widths is less than 2%; averaged over a frequency group, the uncertainties are insignificant. In the Post–Correction epoch, the beams are also
approximately 10% elliptical at both frequencies, but the ellipse orientation parameters are unfortunately poorly determined, as expected for nearly circular beams. The parameters in Figure 4.12 are used to model the main–lobe of the beams for the entire Post–Correction epoch.4

Figure 4.9: The TOD sample–sample noise correlation functions, averaged over all quasar beamruns, az–scans and channels at either 100 GHz (blue) or 150 GHz (green). The correlation functions exhibit two distinct features: a small scale feature due to low–pass filtering in the initial data reduction and a large scale feature associated with atmospheric 1/f–noise, more obvious at 150 GHz (as expected). The back transform of the Fourier space low–pass filter is plotted for comparison (black) and is more easily identified in the inset zoom–in frame. The samples are spaced at 0.2 s, corresponding to ∼ 20 arcseconds at the QUaD scan speed.

Figure 4.10: The auto power spectrum of the TOD sample–sample noise correlation functions, averaged over all quasar beamruns, az–scans and channels at either 100 GHz (blue) or 150 GHz (green). The power spectra exhibit two distinct features: a high–frequency feature due to low–pass filtering in the initial data reduction and a low–frequency feature associated with atmospheric 1/f–noise. The low–pass filter is also plotted for comparison (black).
4. In practice, a beam parameter set from a single beamrun is used in the final analysis. This is sufficient
because the fitting error is not the dominant source of beam uncertainty as discussed in Section 4.6.
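The covariance-weighted fit itself is a generalized least-squares problem; a minimal sketch (SciPy assumed, with a toy noiseless map and an identity pixel covariance; the model and parameter names are illustrative, not the exact form of equations 4.4 and 4.5):

```python
import numpy as np
from scipy.optimize import minimize

def elliptical_gaussian(xy, amp, x0, y0, sx, sy, theta):
    """Two-dimensional elliptical Gaussian beam model."""
    x, y = xy
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amp * np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))

def fit_beam(map_pix, xy, N_inv, p0):
    """Minimize chi^2 = r^T N_p^{-1} r with r = data - model, where N_p
    is the pixel-pixel noise covariance (passed here already inverted)."""
    def chi2(p):
        r = map_pix - elliptical_gaussian(xy, *p)
        return r @ N_inv @ r
    return minimize(chi2, p0, method="Nelder-Mead",
                    options={"xatol": 1e-8, "fatol": 1e-12,
                             "maxiter": 20000, "maxfev": 20000}).x

# Toy: recover a circular beam from a noiseless map, identity covariance.
g = np.linspace(-0.1, 0.1, 15)
X, Y = np.meshgrid(g, g)
xy = (X.ravel(), Y.ravel())
data = elliptical_gaussian(xy, 1.0, 0.0, 0.0, 0.03, 0.03, 0.0)
fit = fit_beam(data, xy, np.eye(data.size),
               p0=(0.9, 0.01, -0.01, 0.025, 0.035, 0.1))
```

With a realistic noise covariance, the same chi-square also yields the parameter uncertainties via its curvature about the minimum.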
141
Figure 4.11: An example beam map and pixel covariance matrix for one channel, on one day.
4.4 Near Sidelobes
In the preceding sections I describe methods to characterize the Gaussian main–lobe only.
Physical optics (PO) modeling using Zemax software (and others) predicts that the beams
should become non-Gaussian at the -20 dB level (Figure 4.13), i.e. at 10^{-2} times the main–
lobe peak power. Unfortunately, individual channel beamrun maps of PKS0537-441 do not
have the sensitivity to accurately measure the two–dimensional beam profile to this level.
Moreover, CMB signal is comparable to the expected sidelobe intensity from the quasar;
though this may be mitigated by carefully polynomial–filtering the TOD. So to properly
characterize the near–sidelobe structure it is necessary to calculate radial profiles, binning the
beamrun maps (or TOD directly) in concentric rings about the calibration source centroid.
As Figure 4.14 makes clear, even radial profiles are too noisy for individual channels, so
near–sidelobe models are constructed from per frequency co–adds.
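Radial binning about the fitted centroid can be sketched as follows (NumPy assumed; names are illustrative), with errors taken from the pixel scatter in each ring as in Figure 4.14:

```python
import numpy as np

def radial_profile(beam_map, x, y, x0, y0, bin_width):
    """Bin map pixels into concentric rings about the source centroid
    (x0, y0); return ring centers, ring means, and the error on each
    mean estimated from the pixel scatter within the ring."""
    r = np.hypot(x - x0, y - y0).ravel()
    v = np.asarray(beam_map, dtype=float).ravel()
    edges = np.arange(0.0, r.max() + bin_width, bin_width)
    centers, means, errs = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            means.append(v[sel].mean())
            errs.append(v[sel].std() / np.sqrt(sel.sum()))
    return np.array(centers), np.array(means), np.array(errs)

# Toy: a circular Gaussian beam gives a monotonically falling profile.
g = np.linspace(-5, 5, 101)                    # arcmin
X, Y = np.meshgrid(g, g)
beam = np.exp(-0.5 * (X ** 2 + Y ** 2) / 1.5 ** 2)
rc, prof, err = radial_profile(beam, X, Y, 0.0, 0.0, 0.5)
```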
The final beam model consists of two independent components: the main–lobe and the sidelobe. The main–lobe is an elliptical Gaussian with parameters measured on a per–channel basis from beamrun maps as discussed in Section 4.3.2. The side–lobe model comes from per–frequency co–added beamrun maps or TOD. A best–fit two–dimensional elliptical Gaussian is subtracted from these maps, revealing the residual sidelobe structure (Figure 4.15), and the corresponding radial profiles are added back to the individual channel Gaussian main–lobes to form the complete beam model.

Figure 4.12: The Post–Correction epoch beam parameters for the three lowest atmospheric noise beamruns, with associated fitting errors derived using the pixel–pixel covariance matrices. The top panel shows the width parameters, both semi–major (dark grey) and semi–minor (light grey) axes, for only one PSB per feed horn in the array. Grey points represent parameters for individual beamruns whereas colored points and lines are averages over the three days, semi–major in red and semi–minor in blue. The three per–day points for a given channel are offset for clarity, with the fourth point, plotted last, being the mean. Inset in the top panel are the χ2 distributions for all the parameter fits per day (grey) as compared to the theoretical expectation (black). The second panel from the top shows the percent uncertainty on the width parameters. The third panel down shows the ellipticity (simply the semi–major to semi–minor ratio). The final panel at the bottom displays the measured orientation angles.

Figure 4.13: Predicted one–dimensional cross–sections of far–field beam patterns for two QUaD feed horns, a) the on–axis horn and b) an off–axis horn, using a variety of methods (O'Sullivan et al., 2008). There is a robust prediction of Gaussian main–lobes but strongly non-Gaussian structure below approximately -20 dB. Figure from O'Sullivan et al. (2008).
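A sketch of composing this two-component model for one channel (NumPy assumed; the Gaussian width and the residual sidelobe profile are illustrative placeholders, not the measured QUaD profiles):

```python
import numpy as np

def beam_model(r, sigma, sidelobe_r, sidelobe_amp):
    """Complete beam: per-channel Gaussian main lobe plus the
    frequency-averaged residual sidelobe radial profile, interpolated
    to radius r and added back on."""
    main = np.exp(-0.5 * (r / sigma) ** 2)
    side = np.interp(r, sidelobe_r, sidelobe_amp, left=0.0, right=0.0)
    return main + side

r = np.linspace(0.0, 10.0, 200)                  # arcmin
side_r = np.array([2.0, 4.0, 6.0, 8.0])          # toy residual radii
side_amp = np.array([0.0, 0.01, 0.005, 0.0])     # ~1% near-sidelobe bump
b = beam_model(r, sigma=2.0, sidelobe_r=side_r, sidelobe_amp=side_amp)
i4 = int(np.argmin(np.abs(r - 4.0)))             # index nearest r = 4 arcmin
```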
One could instead assume the PO models (Figure 4.13) for the per channel beams, but there is no good reason to believe that they predict the shape of the true sidelobes precisely. Nevertheless, this alternative has also been evaluated and shown to produce equivalent results (Brown et al., 2009); in that case, the PO models are fed through simulations of the QSO beamruns and the resulting co-added maps are used to fit the sidelobe amplitude to match the equivalent maps in Figure 4.15.
Figure 4.14: PKS0537-441 beamrun observation radial profiles calculated directly from TOD.
Prior to radial binning, two–dimensional Gaussian profiles are fit to the TOD to determine the source centroid, then the scans are 6th–order polynomial filtered to remove the CMB. The
resulting profiles broadly exhibit the structure predicted by physical optics modeling (see
text and compare with Figure 4.13). Radial profiles are shown for individual channel data
(grey) and the mean over all channels in the frequency group (solid orange). Error bars are
simply estimated from the scatter of pixels in a radial bin; though they are not shown for
the individual channels the error on the mean is derived from them.
Figure 4.15: The residual after subtracting a best–fit Gaussian profile from a co–added
PKS0537-441 beamrun map (top) and resulting radial profiles, which are added to the Gaussian main–lobes in the final beam model (bottom). Figure courtesy of C. Pryke
4.5 Array Parameters
In addition to measuring the individual channel beam main-lobe parameters, the full beamrun observations are also used to measure the pointing direction of each channel with respect
to the telescope pointing axis, i.e. the relative channel centroid offsets. This follows relatively straightforwardly from the discussion in Section 4.3.2 — since the beamrun is a full
two-dimensional raster map — and doesn’t require more elaboration here. Since the beamruns are completed infrequently, they are not sufficient for studying the short-term stability
of the array. The rowcals can be used in place of beamruns, just as they are for the short-term beam fluctuation, by considering the two distinct deck-angle scans; in this case there are only two parameters being measured, so no outside information is required.
Working in the flat-sky approximation, the RA degrees-on-sky (x^{dos}) and DEC (y) offsets of a given channel from the telescope pointing direction (x_0, y_0), in an observation at a deck-angle θ, can be related to the radial distance from the pointing axis r and angle about the pointing axis φ via the expressions,

x_i^{dos} = x_i cos(y_i + y_0) = r sin(φ − θ_i)
y_i = r cos(φ − θ_i),

where i labels the observation. Combining the two expressions yields an equation relating the one-dimensional RA offset to the two-dimensional array parameters r and φ,

x_i cos(r cos(φ − θ_i) + y_0) = r sin(φ − θ_i).    (4.20)
The RA offset is determined by the Gaussian centroid positions from the rowcal blip-fits and
this transcendental equation for the two unknowns can be solved with measurements at two
distinct deck-angles. Since only the centroids are being used here, the structure of RCW38
should have no effect.5
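Given blip-fit RA offsets at two deck-angles, the transcendental equation (4.20) can be solved numerically; a sketch (SciPy assumed, all angles in radians, with toy values for the boresight declination and deck-angles):

```python
import numpy as np
from scipy.optimize import fsolve

def array_params(x_dos, thetas, y0):
    """Solve x_i cos(r cos(phi - theta_i) + y0) = r sin(phi - theta_i)
    (eq. 4.20) for the radial offset r and position angle phi, given
    measured RA degrees-on-sky offsets x_i at two distinct deck-angles."""
    def resid(p):
        r, phi = p
        return [x * np.cos(r * np.cos(phi - th) + y0) - r * np.sin(phi - th)
                for x, th in zip(x_dos, thetas)]
    return fsolve(resid, x0=[0.01, 0.5])

# Toy: generate offsets from a known (r, phi) and recover them.
r_true, phi_true = np.radians(0.6), np.radians(40.0)
y0 = np.radians(-60.0)                  # illustrative boresight declination
thetas = np.radians([0.0, 60.0])        # two deck-angles
x_dos = [r_true * np.sin(phi_true - th)
         / np.cos(r_true * np.cos(phi_true - th) + y0) for th in thetas]
r_fit, phi_fit = array_params(x_dos, thetas, y0)
```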
The results of calculating these RA/DEC offsets for ∼ 200 days in the Post-Correction epoch are shown in Figure 4.16. There is a substantial range in the offsets from day to day, with an rms of ∼ 0.25 arcmin, only a small component of which is due to the precision
5. Not quite true if the diffuse structure is strong and asymmetric, in which case it might bias the centroid
depending on the direction of the rowcal scan. In practice this is not important for RCW38 (see Figure 4.4).
(Figure 4.16 plot: RA/DEC channel offsets; std = 0.27 arcmin.)
Figure 4.16: The pointing offsets (array positions) of all the channels (A in blue, B in orange) as measured from rowcal blip-fit centroids for ∼ 200 days in the Post-Correction epoch. The daily measurements are plotted as crosses and the mean of all days as a black point. The ∼ 0.25 arcmin rms about the mean offset values is mostly due to thermal fluctuations in the foam cone. Missing data corresponds to channels not used in the analysis of data from this particular epoch.
(Figure 4.17 panels, top to bottom: % radial deviation for the four rings of radii 0.30, 0.53, 0.61 and 0.80 deg; RA/Dec offset [arcmin]; dk offset [deg]; plotted against time from Apr06 to Aug07.)
Figure 4.17: The ring-fit parameters (see text) measured from the rowcal derived offsets
shown in Figure 4.16 for ∼ 200 days in the Post-Correction epoch. The correlated fluctuations in the ring radii (top) and the temperature dependence demonstrated in Figure 4.18
make the case that they are due to thermal fluctuations in the cone. The pointing offsets
(middle panel) are consistent with those derived from the pointing crosses discussed in Section 3.1, as are the fluctuations in the rotation angle (bottom), which corresponds to the
deck-angle.
of the blip-fit parameters or associated with thermal fluctuations in the cone; most of it is
likely due to pointing drift. If thermal fluctuation is the main cause, then one might expect
the fluctuation to be primarily radial since as the focus fluctuates, the beams will move in
and out from the pointing axis. An interesting way to account for this is to exploit the
arrangement of the feed horns. The array is designed so that horns lie in one of four radial
Figure 4.18: The ring-fit parameters (see text) measured from the rowcal derived offsets shown in Figure 4.16 for ∼ 200 days in the Post-Correction epoch, versus ambient temperature. There is a slight temperature dependence in the ring-radii parameters (top); best-fit first-order polynomials are over-plotted. The other parameters do not exhibit any significant temperature dependence.
rings about the pointing axis. One can group the channels in a given ring together and fit for
that ring’s radius R and net rotation angle around the pointing axis Θ (now related to the
deck-angle) for every day. All the rings should also share a common center point (X0 , Y0 )
that may or may not align with the pointing axis depending on the accuracy of the pointing.
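The per-ring fit sketched below solves simultaneously for the ring radii, a common rotation and a common center (SciPy assumed; channel positions are toy values):

```python
import numpy as np
from scipy.optimize import least_squares

def ring_fit(x, y, phi_nom, ring_id, n_rings):
    """Fit per-ring radii R_k plus a common rotation Theta and center
    (X0, Y0) to measured channel offsets (x, y), given each channel's
    nominal azimuthal position phi_nom and ring membership ring_id."""
    def resid(p):
        R, (theta, X0, Y0) = p[:n_rings], p[n_rings:]
        xm = X0 + R[ring_id] * np.cos(phi_nom + theta)
        ym = Y0 + R[ring_id] * np.sin(phi_nom + theta)
        return np.concatenate([x - xm, y - ym])
    p0 = np.concatenate([0.5 * np.ones(n_rings), [0.0, 0.0, 0.0]])
    return least_squares(resid, p0).x

# Toy: two rings with a small common rotation and center shift.
R_true = np.array([0.30, 0.61])
phi = np.tile(np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False), 2)
rid = np.repeat([0, 1], 8)
x = 0.01 + R_true[rid] * np.cos(phi + 0.02)
y = -0.02 + R_true[rid] * np.sin(phi + 0.02)
p = ring_fit(x, y, phi, rid, n_rings=2)
```

Repeating the fit day by day yields the time series of radii, rotation and center shown in Figure 4.17.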
The ring-fit parameter results are shown in Figure 4.17 for the same ∼ 200 days and
their temperature dependence is plotted in Figure 4.18. The fluctuations in the radii are
Figure 4.19: Residual pointing offsets for all the channels (A in blue, B in orange) as measured
by rowcal blip-fit centroids for ∼ 200 days in the Post-Correction epoch after subtracting
correlated ring-fit fluctuations; axes are in arcseconds. The daily measurements are plotted
as crosses and the mean of all days as a black point. The mean pointing offset of the PSB
pairs has been subtracted from each pair for plotting purposes. Missing data corresponds to
channels not used in the analysis of data from this particular epoch.
at the sub-percent level, which even for the largest ring is much less than half an arcminute, but are highly correlated and exhibit a moderate temperature dependence. These are evidently fluctuations due to the thermal expansion of the foam cone. The pointing offset also fluctuates at the sub-arcminute level and these results are consistent with the fluctuation observed in the pointing crosses discussed in Section 3.1. The fluctuations in the net rotation angle, which is also assumed to be common between rings, are roughly the same magnitude as the pointing fluctuations (for a given channel offset from the pointing axis), though there is a small systematic shift between seasons. These small fluctuations will tend to broaden the beam slightly in a final full co-add CMB map but shouldn't significantly affect the beam in a single-day beamrun map.
The ring-fit parameters can be subtracted from the raw rowcal derived offsets, which leaves the residuals due to fitting accuracy in the rowcal blip-fits; Figure 4.19 shows the results. The rms of the offsets is reduced to ∼ 5 arcseconds (a factor of 4) and is consistent with the blip-fit parameter uncertainty alone. Doing this brings the AB pointing offset mismatch into stark relief, though it is identifiable with lower significance in the beamrun data. There is a few-arcsecond-level difference in the pointing directions of the two PSB channels in a given feed horn (referred to as the A and B channels). The distribution of the offsets is not random: the magnitude is clearly dependent on the distance of the horn from the pointing center, and the orientation of the offset depends both on the direction of the polarization-sensitivity of the PSB pair and on its angular position about the pointing axis. Though the pattern suggests birefringence in the lenses, no definitive explanation for this effect has been determined. The effect is irrelevant for the QUaD polarization results: the AB-offsets are accounted for in the simulations and lead to negligible leakage from intensity to polarization signal when the data from the two channels is differenced.
4.6 Beam Uncertainty
Having provided a broad overview of all the major components of the beam model that is used
in simulations I will now address the impact of the model uncertainty on the final bandpowers.
During the first season analysis (Ade et al., 2008), the beam uncertainty was attributed to the
extreme fluctuations, which could only be partly correlated with the ambient temperature
alone, and to the uncertain RCW38 intrinsic width factor. Thus a very conservative 30%
width uncertainty was adopted on a Gaussian beam with no sidelobes accounted for in the
model. The improved sensitivity of the data from two new seasons of observations, more
stringent controls on systematic contamination and improved analysis methods necessitate a
more precise beam model in the Post–Correction epoch. Accordingly, the preceding Sections
detailed many of these improvements in the beam model.
There are several potentially significant sources of uncertainty on the Post–Correction
epoch beam model:
1. Fitting uncertainty in the main–lobe width parameters
2. Uncertainty in the empirical near sidelobe model amplitude
3. Day–to–day variation in the beam width
4. Day–to–day variation in the array–ring radii
5. Day–to–day pointing wander
The first component is relatively easy to account for, is well understood and has been shown to be very small (Figure 4.12): less than ∼ 2% per channel, and therefore ∼ 2%/√30 ≈ 0.35% coadded over all channels of a given frequency. This is easily translated into an uncertainty on the bandpowers via eq. 4.3.
The uncertainty on the non–Gaussian near sidelobe is a little more difficult to translate
into an uncertainty on the bandpowers as the errors on the empirical sidelobe radial profile
points in Figure 4.15 allow for different sidelobe shapes. Furthermore, though the real space
uncertainty is small in magnitude it can have a more dramatic effect in Fourier space just as
the Gaussian width does. To deal with the uncertainty on the sidelobes, consider the sidelobe
transfer function B_ℓ^2, and the upper/lower limits possible given the real space uncertainty δB(θ),

W_ℓ = (FT[B(θ)])^2
W_{ℓ,min} = (FT[B(θ) + δB(θ)])^2
W_{ℓ,max} = (FT[B(θ) − δB(θ)])^2.

The fractional uncertainty on the transfer function is then given by,

S_ℓ = (W_{ℓ,max} − W_{ℓ,min}) / W_ℓ.
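For the Gaussian main–lobe the transfer-function limits have a closed form, so the width contribution to the fractional bandpower uncertainty can be sketched directly (NumPy assumed; the beam FWHM is an illustrative value, not the measured QUaD beam):

```python
import numpy as np

def gaussian_Wl(ell, fwhm_arcmin):
    """Gaussian-beam transfer function W_l = B_l^2."""
    sigma = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-ell * (ell + 1.0) * sigma ** 2)

def width_Sl(ell, fwhm_arcmin, frac_err):
    """Fractional bandpower uncertainty S_l = (W_max - W_min) / W from a
    fractional uncertainty on the main-lobe width: a wider beam gives
    more suppression (W_min), a narrower one less (W_max)."""
    W = gaussian_Wl(ell, fwhm_arcmin)
    W_min = gaussian_Wl(ell, fwhm_arcmin * (1.0 + frac_err))
    W_max = gaussian_Wl(ell, fwhm_arcmin * (1.0 - frac_err))
    return (W_max - W_min) / W

ell = np.arange(2, 3001)
Sl = width_Sl(ell, fwhm_arcmin=3.5, frac_err=0.02)  # 2% width uncertainty
```

As in Figure 4.20, even a small real-space width error produces a fractional uncertainty that grows steeply toward high multipoles.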
The resulting uncertainty on the bandpowers S_ℓ for the main–lobe and sidelobes is plotted in Figure 4.20, though in this figure the assumed uncertainty on the main–lobe width is larger for reasons elucidated in the following discussion.6
Uncertainty in the day–to–day variation of the beam widths implies that the width
measured on any given day via the beamrun maps is potentially larger or smaller than the
true average width. There is no good explanation for why this happens but the fluctuation
is still likely to be driven by thermal expansion in the foam cone, though it doesn’t seem to
correlate in a simple linear way with the instantaneous ambient external temperature.7 The
rowcal blip–widths shown in Figure 4.3 are the best handle on this fluctuation and imply an
6. This figure plots the transfer function uncertainty as S_ℓ = ∆B_ℓ^2 / B_ℓ^2, where W_ℓ = B_ℓ^2.
7. Otherwise it would manifest as a temperature dependence in Figure 4.2. The exceedingly small arcsecond-level array parameter temperature dependence evident in Figure 4.18 further attests to this paradox.
Figure 4.20: The systematic beam uncertainty on the final bandpowers. The main–lobe
component assumes a 2% uncertainty on the Gaussian width (see text) and the non–Gaussian
sidelobe uncertainty comes from the difference in beam suppression when assuming the 3–
σ upper/lower limits of the empirical sidelobe model. The two components of the beam
uncertainty are added in quadrature. Figure from Brown et al. (2009)
rms of ∼ 2%. This uncertainty is really exclusive to the main–lobe since the near sidelobes
are relatively flat where they are detected and shouldn’t be affected by this fluctuation.
Since the day–to–day rms fluctuations are correlated between channels, this uncertainty is
substantially more important than the insignificant fitting errors considered in the preceding
discussion.
The last two effects in the list above are related in their impact on the beam. The
day–to–day pointing fluctuation will tend to broaden out the beams in the final co–added
maps since two observations of the same patch of sky on different days will be ever so slightly
offset. The pointing wander has been identified in various ways and is well understood, so
it is included in simulations and produces the expected level of broadening (as measured by
comparing the QSO in CMB maps and beamrun data). The day–to–day variation in the
radial pointing offsets will have effectively the same impact as the pointing wander. Though
this pointing wander is only in the radial direction, when added over all the channels it will average out into a circularly symmetric pattern. Figure 4.17 indicates that the rms in the radial offsets is ∼ 0.2%, which for the outermost 150 GHz channels translates to roughly one tenth of an arcminute; this is paltry compared with the already simulated pointing fluctuations.
Therefore the only significant sources of beam uncertainty are a ∼ 2% uncertainty on the
width of the main–lobe from the correlated day–to–day fluctuations and the near sidelobe
model uncertainty; these are both reflected in Figure 4.20.
4.7 Far Sidelobes
I have saved the discussion of the far sidelobes for last because they are not included in the
simulated beam model as they extend well beyond the area of the CMB field at anywhere from
∼ 10◦ to 100◦ away from the pointing axis. The distribution of known QUaD far sidelobes
is illustrated in Figure 4.21. Unlike the main-lobe or near sidelobes, these sidelobes are not associated with the primary optical chain, i.e. mirrors, lenses, or feed horns; their associated power is very low, far below the -20 dB near sidelobes, and they are diffuse and extended. Yet their large angular separation poses the risk of contaminating the data with pickup from the ground or very bright astronomical sources not in the target field.
The inner circular sidelobe is caused by the reflection of rays leaving the secondary mirror
(Figure 4.21 annotations: inner circular sidelobe, from the reflecting collar, at ±7 to 12 degrees, center and edge pixels; radial sidelobe, from the diffracting edge, edge pixels only, −15 to −55 degrees; diffuse sidelobe, from the cone plus primary, center and edge pixels; outer circular sidelobes, from reflection off the cone, at approximately ±100 degrees.)
Figure 4.21: This figure diagrams all the known QUaD far-sidelobes. These sidelobes are
generically due to diffraction or reflection from optical components.
Figure from Hinderks et al. (2009)
back off of the cryostat baffle (discussed in Section 2.1.4). The radial sidelobe is caused by diffraction at the termination of the beam inside the cryostat, but it only affects the outer 150 GHz off-axis channels. These sidelobes were studied with special large-angle raster scan observations of a chopped Gunn source mounted to the top edge of the ground-shield; Figure 4.22 shows resulting maps for two channels, which illustrate that these weak sidelobes appear at < -30 dB. The original parabolic baffle from the first season was replaced with a flat one that moved the circular sidelobes in from ∼ 20° to closer to 10° in the Post-Correction epoch, to reduce the chance of intersecting a contaminating source.

The remaining sidelobes are a consequence of ∼ 2% reflection off of the foam cone. Rays leaving the primary mirror reflect back off the foam cone, leading to the outer circular
Figure 4.22: Special observations of a chopped Gunn source mounted to the top edge of
the ground-shield at an elevation of ∼ 21◦ ; maps for on-axis (top) and off-axis (bottom)
150 GHz channels are shown. The color-scale shows the intensity relative to the main-lobe
peak. These sidelobes are the suspected origin of the ground contamination signal. Figure
from Hinderks et al. (2009)
sidelobe, and the re-reflection of some rays by the primary mirror again may result in an
undetected diffuse sidelobe. The large ∼ 100◦ angular radius of the outer circular sidelobe
makes it a particularly dangerous source of potential contamination, but the large angular
area (and low power) of this sidelobe requires an exceptionally bright astronomical source
to have an impact in the data. Figure 4.23 shows maps of data that has been contaminated
by the Sun and the Moon (on separate occasions) compared with the expectation from a
model of the outer circular sidelobe. The model proves to be in excellent agreement with the
contamination observed allowing for only one free parameter, the opening angle of the cone,
which can vary with thermal expansion or other flexure. It was used to conservatively flag
data that might be contaminated by the Moon (the Sun is below the horizon during CMB
observations).
Figure 4.23: Real single PSB pair maps (left) for data contaminated by the Sun (top) and Moon (bottom), compared to simulated maps (right) that incorporate a circular sidelobe model based on a best-fit cone opening angle, fixed sidelobe width and the true positions of the Sun and Moon at the time of observation. The different rates of Sun/Moon motion on the sky result in different stripe slopes. Figure from Hinderks et al. (2009)
CHAPTER 5
THE HIGH-` TT POWER SPECTRUM
The preceding chapters described the QUaD telescope, the process of measuring the sky
intensity and mapmaking. In this chapter I complete the analysis by detailing how the temperature anisotropy power spectrum is estimated for all scales at which QUaD has adequate sensitivity. In Section 5.1 I discuss how the MASTER methods are implemented for the QUaD bandpower estimation, and I present the basic bandpower results for 2000 < ℓ < 3000 in Section 5.2. Here again, I omit any discussion of calculating the (E and B mode) polarization spectra and cross spectra as they are irrelevant to the subject of this thesis.
5.1 Processing the Spectra
In the following section I detail the final stage of the analysis, the procedure required to
estimate the power spectrum from the final maps and simulations. The method employed
here broadly follows the MASTER analysis technique (Hivon et al., 2002) and makes use of
simulations to correct the raw spectra for noise bias and filtering and estimates the bandpower
covariance from the simulations. For the QUaD field size, it is adequate to invoke the flat-sky approximation, i.e. the spherical harmonic decomposition reduces to a two-dimensional Fourier transform.
The spectrum estimation proceeds roughly as follows. First, sources are masked and the maps are apodized. They are then Fourier transformed; the squared moduli of the modes are weighted by their expected signal to noise and averaged in bins to derive bandpowers. The same is done for all simulations. The mean noise-only bandpowers from simulations are subtracted from the real bandpowers. Finally, the filtering is corrected for by comparing the signal-only simulations against the input model using the bandpower window functions. The same is done for all signal-plus-noise simulations and their fluctuation is used to estimate the uncertainty.
5.1.1 Apodization and Source Masking
Prior to Fourier transforming the maps it is first necessary to apodize them to reduce ringing and suppress noisy data. A multiplication in image space is a convolution in the Fourier domain; therefore the step imposed on the CMB sky by the sharp edge of the observation field will result in Fourier mode mixing by convolution, and the sharper the step, the more pronounced and extended the ringing. The apodization mask is chosen to be a smooth image space window function which is flat in the center of the map and tapers to zero at the edges to roll off the sky cut. For QUaD, a natural choice is the inverse of the variance map v calculated during the mapmaking in Section 3.3 and shown in Figure 3.9. It is a nice feature of the QUaD detector array that the coverage falls off smoothly at the map edges, thereby smoothly increasing the variance.

Though the variance map smoothly rises to large values at the edge where the field coverage is shallow, it is not quite smooth enough. Regions near the map boundary have only been observed by a small subset of detectors. There the variance map is patchy, with step-like features at the edges of each detector's observed field. This sharp fluctuation will lead to "up-mixing" of low multipole power to high when the apodization mask is applied, i.e. the "tile-edge effects" mitigated against by the scan-end deweighting. The simple solution in this case is to smooth the mask with a 0.5 degree FWHM Gaussian.
The bright sources detected in Section 3.4 must also be masked out of the maps. CMB power is exponentially damped at higher multipoles but point source power is flat and will completely dominate at high multipoles. Furthermore, the field-to-field variation of signal from bright sources is non-Gaussian. Whereas the source power is flat in the Fourier domain, the scarce bright sources are compact in image space, so accounting for them there is easy to do without degrading the sensitivity to the diffuse CMB. To mask the sources, a 0.5 degree FWHM Gaussian divot is injected into the apodization mask at the location of each of the seven sources detected at > 5-σ (listed in Table 3.1). The same is done for each simulation using its particular source catalog (see Section 3.4).
The final apodization mask is then given by

a = G(0,0) ∗ v^{-1} [1 − Σ_{i=1}^{7} G(x_i, y_i)],    (5.1)

where G is a unit peak-height, two-dimensional Gaussian profile and (x_i, y_i) are the > 5-σ source positions in the map.
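A sketch of building such a mask (SciPy assumed; pixel units and toy shapes, with the smoothing kernel and divot widths both standing in for the 0.5 degree FWHM choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apodization_mask(var_map, src_pix, fwhm_pix):
    """Eq. (5.1) sketch: smooth the inverse-variance map with a Gaussian
    kernel, then multiply by unit-height Gaussian divots that zero the
    mask at each bright-source position."""
    sigma = fwhm_pix / np.sqrt(8.0 * np.log(2.0))
    smoothed = gaussian_filter(1.0 / var_map, sigma)
    yy, xx = np.indices(var_map.shape)
    divots = np.zeros_like(smoothed)
    for sy, sx in src_pix:
        divots += np.exp(-0.5 * ((xx - sx) ** 2 + (yy - sy) ** 2) / sigma ** 2)
    return smoothed * (1.0 - divots)

v = np.ones((64, 64))
v[:4, :] = 10.0                      # a noisy edge region is down-weighted
mask = apodization_mask(v, src_pix=[(32, 32)], fwhm_pix=6.0)
```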
5.1.2 Fourier Plane Weighting/Masking
Fourier modes of the CMB signal are expected to be isotropic; therefore arbitrarily weighting modes in the Fourier plane will not bias bandpowers so long as the weights are independent of the actual data values. Figure 5.1 shows the mean noise-only and signal-only simulation 2daps (see Section 3.4); the distribution of the noise is strongly correlated and non-uniform. The dark bands in the figure correspond to the polynomial filtering, which removes atmospheric 1/f-noise in the scan direction (x-axis), yet it is clear that at higher multipoles along the orthogonal direction (y-axis) this noise component persists due to striping in the map. Furthermore, correlations between detectors — also likely due to common atmospheric signal — are evident as blotches of high frequency noise evenly spaced throughout the plane. Clearly, down-weighting these modes will substantially suppress the noise power in the bandpowers at higher multipoles where modes are abundant.
Figure 5.1: Fourier plane weights are constructed from the mean of noise only (left column) and signal only (center column) simulation two-dimensional auto power spectra (100 GHz top row and 150 GHz bottom row). The Fourier plane weights (right column) have been normalized by the 90th percentile in radial bins, corresponding to the actual bandpower bins, for plotting purposes only and hence the abrupt cut-off at ℓ ≈ 3000. The blacked-out wedges to the right and left along the x-axis are intentionally excised regions where up-mixing may be a concern (see text).
Fourier plane weights are calculated as

W = \left( \frac{S^{cmb}}{S^{cmb} + N} \right)^2 ,    (5.2)
where S^{cmb} and N are the signal and noise 2daps; the resulting weights are shown in Figure 5.1. Fourier plane weighting has a dramatic effect on the bandpower uncertainty at high ℓ — for 150 GHz the fluctuation is suppressed by as much as an order of magnitude in the range 2500 < ℓ < 3000.
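The weighting of equation 5.2 is a one-liner; a sketch, with the mean simulation spectra as inputs:

```python
import numpy as np

def fourier_plane_weights(S_cmb, N):
    """Sketch of eq. (5.2): down-weight Fourier modes by the squared
    signal fraction, built from the mean signal only (S_cmb) and noise
    only (N) two-dimensional auto power spectra."""
    return (S_cmb / (S_cmb + N)) ** 2
```

Noise-free modes get weight one, while a mode with noise power three times the signal is suppressed to (1/4)² = 0.0625.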
In addition to the weighting, masking in the Fourier plane can also be employed under the same assumptions. Though the up-mixing due to tile-edge effects in the maps and apodization mask is almost entirely suppressed by the scan-end deweighting in the mapmaking and by smoothing the variance map, some up-mixing may still be possible. Regions where this up-mixing is known to be apparent are therefore also excised from the analysis, hence the blacked-out wedges in Figure 5.1. To test the level of up-mixing in the final bandpowers, signal only simulations with input CMB power at ℓ < 1000 only were carried through to final bandpowers. This constrained the effect to be at most a 1% problem, even at the highest multipoles of this analysis.
5.1.3 Bandpowers
To calculate the raw bandpowers, the apodized map is first Fourier transformed,

\tilde{m} = c \cdot FT(a \cdot m),    (5.3)
where c is a normalization that preserves power under the apodization and transform. The squares of the modes are then weighted and averaged into multipole bins,

\tilde{C}_b = \left\langle \frac{\ell(\ell+1)}{2\pi} W |\tilde{m}|^2 \right\rangle_b ,    (5.4)

where the multipole is related to the Fourier conjugate variable u = 1/θ by \ell = 2\pi\sqrt{u^2 + v^2}.
The bin width is chosen to be ∆ℓ = 81 for no particularly good reason, and the spectra are "flattened" within a bin by the ℓ(ℓ+1)/2π prefactor. Figure 5.2 illustrates the quantity inside the brackets of equation 5.4 with the corresponding multipole bins for ℓ > 2000. Simulation bandpowers are calculated in exactly the same way.
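The apodize, transform, and bin sequence of equations 5.3 and 5.4 can be sketched in numpy; the normalization c is omitted, and the pixel size and map geometry in any call are illustrative:

```python
import numpy as np

def raw_bandpowers(map2d, apod, weights, pix_rad, delta_ell=81, lmax=3000):
    """Sketch of eqs. (5.3)-(5.4): apodize the map, Fourier transform,
    then average the weighted, flattened squared modes into annular
    multipole bins.  The power-preserving normalization c is omitted."""
    ny, nx = map2d.shape
    m_ft = np.fft.fft2(apod * map2d)
    u = np.fft.fftfreq(nx, d=pix_rad)            # cycles per radian
    v = np.fft.fftfreq(ny, d=pix_rad)
    uu, vv = np.meshgrid(u, v)
    ell = 2.0 * np.pi * np.sqrt(uu**2 + vv**2)   # ell = 2*pi*sqrt(u^2+v^2)
    flat = ell * (ell + 1.0) / (2.0 * np.pi) * weights * np.abs(m_ft)**2
    edges = np.arange(0, lmax + delta_ell, delta_ell)
    bands = np.zeros(len(edges) - 1)
    for b in range(len(bands)):
        sel = (ell >= edges[b]) & (ell < edges[b + 1])
        bands[b] = flat[sel].mean() if sel.any() else 0.0
    return edges, bands
```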
This is equivalent to

\tilde{C}_b = \sum_\ell P_{b\ell} \tilde{C}_\ell ,    (5.5)
Figure 5.2: The two-dimensional auto power spectra for the real maps (top). Some of the features present in the simulation 2daps in Figure 5.1 can be identified here, though many are not significant enough to appear in what is effectively a single realization. The effect of the signal-to-noise weighting and mode excision should be evident (bottom); the noisy modes at large multipoles along the x-axis are entirely suppressed. The ℓ > 2000 bins used in bandpower estimation are over-plotted (red).
where \tilde{C}_\ell is the "azimuthally averaged" weighted map 2daps, i.e.

\tilde{C}_\ell = \frac{1}{2\ell + 1} \sum_{m=-\ell}^{\ell} W_{\ell m} |\tilde{m}_{\ell m}|^2 ,    (5.6)
and P_{b\ell} is a binning operator defined as

P_{b\ell} = \begin{cases} \dfrac{1}{\ell_{low}^{b+1} - \ell_{low}^{b}} \, \dfrac{\ell(\ell+1)}{2\pi} & \text{for } 2 \le \ell_{low}^{b} \le \ell < \ell_{low}^{b+1} \le 3000 \\ 0 & \text{otherwise.} \end{cases}    (5.7)
In the parlance of the MASTER analysis, \tilde{C}_\ell are the measured pseudo-C_\ell, convolved by the sky cut (i.e. the apodization). Note that here W is the Fourier plane weight and not the sky window.
The resulting bandpowers for the real maps, signal plus noise simulations and signal plus noise plus point source simulations are shown in Figures 5.3 and 5.4, with > 5σ source masking turned off and on respectively. Masking even the brightest sources in the QUaD maps makes little difference for ℓ < 1000 but is absolutely critical for ℓ > 2000, where the distribution of un-masked bandpowers is highly non-Gaussian. Once these brightest sources are masked, the signal plus noise plus point source simulation distribution is nearly indistinguishable from the signal plus noise only distribution, implying that the residual source power after masking is small.
In some sense, if the goal were only to answer the question of whether the QUaD data
was consistent with the simulation input models (i.e. the CMB alone or with a radio source
foreground) the analysis could just stop here. If it were found to be inconsistent, then the
signal simulations would be rerun for a new input model and compared again. This is a
terribly inefficient way to operate, so instead the spectra are corrected for systematic effects
associated with the experiment to render them more useful for comparison to an arbitrary
166
Figure 5.3: Bandpowers calculated from real (black), simulated signal plus noise (grey) and
simulated signal plus noise plus point source (light green) maps before masking > 5σ sources.
The highly non-Gaussian bright source component of the distribution is evident in the relatively rare but highly deviant spectra. The mean of the noise only simulations (blue) will be
subtracted from both the raw real bandpowers and the signal plus noise simulations. The
mean of the signal only simulations (red) is compared to the ΛCDM expectation values to
derive the filter/beam correction (see text).
Figure 5.4: Same as Figure 5.3 except the bandpowers are calculated after masking > 5σ
sources. The highly non-Gaussian bright source component of the distribution — evident
in Figure 5.3 — is suppressed by the source masking rendering the signal plus noise plus
point source bandpower distribution nearly indistinguishable from the signal plus noise only
distribution.
model.
5.1.4 Noise Subtraction
The next step in the process is to remove the noise bias by subtracting the mean noise simulation bandpowers from the real data and signal plus noise simulations, i.e. the blue curve in Figure 5.4 from the grey lines and black points. The mean of the signal plus noise simulations after subtracting the noise mean is equivalent to the mean of the signal only simulations, though their fluctuation is not. Though free of noise bias, the spectra are still suppressed by the beams, the ground template subtraction and the polynomial 1/f filtering. To correct for this suppression one need only compare the input and output signal spectra. Yet the apodization-induced mode mixing makes it incorrect to compare them directly — so first a slight detour.
5.1.5 Bandpower Window Functions
Looking at the two-dimensional auto power spectra plotted in Figure 5.2, it should be clear that adjacent modes in the Fourier plane are highly correlated. This mode-mode correlation, or mode mixing, is predominantly related to the image space sky window (i.e. the apodization mask); a multiplication in image space is a convolution in Fourier space. Even the 2daps from a perfect experiment with no noise or contamination and infinite resolution would still be subject to this mode mixing from the window. Effectively this means that the bands used for binning the spectra are correlated with adjacent bands through what are commonly called the bandpower window functions. This effect must be characterized before any theoretical spectra can be compared against the QUaD bandpowers.
For this analysis, the bandpower window functions are calculated as follows:

1. Convolve the annulus of a narrow band with the Fourier transform of the mask, i.e. \tilde{P}_b = P_b * \tilde{a}, where P_b is one within the band b and zero outside it (akin to the binning operator P_{b\ell}).

2. Calculate the power spectra of the convolved bands and make them the rows of a matrix. The columns of this matrix are the bandpower window functions, i.e. \beta_{bb'} = \langle |\tilde{P}_b|^2 \rangle_{b'}.

3. Interpolate \beta_{bb'} to every multipole to make it \beta_{b\ell}.1

Figure 5.5: The bandpower window functions for all bands from 200 < ℓ < 3000. Though they appear indistinguishable to the eye, the functions at 100 GHz are in fact slightly broader than those at 150 GHz. A single bandpower window function for b = 1500 is highlighted for clarity.
The resulting bandpower window functions are plotted in Figure 5.5; the overlap of the
functions from band-to-band implies a ∼ 20% bandpower correlation. The expectation values
of any input spectrum are calculated as
E_b = \sum_\ell \beta_{b\ell} C_\ell^{th} ,    (5.8)
and are directly comparable to the bandpowers calculated with equation 5.4.
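A sketch of steps 1 and 2 of the recipe above (the interpolation of step 3 is omitted, and the mask geometry in any call is illustrative):

```python
import numpy as np

def bandpower_window_functions(apod, pix_rad, delta_ell=81, lmax=3000):
    """Sketch of steps 1-2: convolve each band annulus P_b with the
    Fourier transform of the apodization mask, take the power, and bin
    it over bands b' to build the rows beta_{bb'}."""
    ny, nx = apod.shape
    a_ft = np.fft.fft2(apod)
    u = np.fft.fftfreq(nx, d=pix_rad)
    v = np.fft.fftfreq(ny, d=pix_rad)
    uu, vv = np.meshgrid(u, v)
    ell = 2.0 * np.pi * np.sqrt(uu**2 + vv**2)
    edges = np.arange(0, lmax + delta_ell, delta_ell)
    nb = len(edges) - 1
    beta = np.zeros((nb, nb))
    for b in range(nb):
        annulus = ((ell >= edges[b]) & (ell < edges[b + 1])).astype(float)
        # convolution in the Fourier plane done as a product of FFTs
        conv = np.fft.ifft2(np.fft.fft2(annulus) * np.fft.fft2(a_ft))
        power = np.abs(conv)**2
        for bp in range(nb):
            sel = (ell >= edges[bp]) & (ell < edges[bp + 1])
            beta[b, bp] = power[sel].mean() if sel.any() else 0.0
    s = beta.sum(axis=1, keepdims=True)   # normalize each populated row
    return np.divide(beta, s, out=np.zeros_like(beta), where=s > 0)
```

For an untapered (top-hat) mask the window functions collapse onto the diagonal; a real apodization spreads power into neighboring bands.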
5.1.6 Filter/Beam Correction
The filter/beam correction factor is calculated as the ratio between the mean signal only simulation bandpowers and the input spectrum expectation values,

F_b = \frac{\langle S_b^{cmb} \rangle_{MC}}{E_b} ,    (5.9)
where \langle S_b^{cmb} \rangle_{MC} is the mean of the signal only simulations, the red line in Figure 5.4. The filter/beam correction factor (transfer function) incorporates all the filtering, even pixelization effects. The resulting function is plotted in Figure 5.6. At the lowest multipoles it reflects the polynomial filter and ground removal, and at the highest multipoles the Gaussian main lobe of the telescope beam; at intermediate multipoles it is dominated by the beam near sidelobes (see Section 4.4).
1. The accuracy of the last step can also be improved by boosting the resolution of P and a.
Figure 5.6: The filter/beam transfer function calculated by comparing the signal only simulations to the input ΛCDM model expectation (black). This function characterizes the suppression of sky power from the dual effects of beam convolution (dominant at high multipoles)
and polynomial filtering and ground removal in the timestream data prior to mapmaking
(dominant at low multipoles). A transfer function corresponding to a Gaussian beam alone
is over-plotted for comparison assuming widths from the measured telescope main-lobes.
5.1.7 Final Bandpowers and the Covariance Matrix
The final estimated bandpowers are calculated as

\hat{C}_b = F_b^{-1} \left( \tilde{C}_b - \langle \tilde{N}_b \rangle_{MC} \right)    (5.10)
for both the real and the simulated signal plus noise plus point source spectra. In terms of pseudo-C_\ell the equivalent expression would be

\hat{C}_b = F_b^{-1} \sum_\ell P_{b\ell} \left( \tilde{C}_\ell - \langle \tilde{N}_\ell \rangle_{MC} \right) .    (5.11)
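Equations 5.9 and 5.10 amount to a noise debias followed by a division; a sketch assuming per-band arrays, with simulation inputs of shape (n_realizations, n_bands):

```python
import numpy as np

def final_bandpowers(raw_bands, noise_sims, signal_sims, expectation):
    """Sketch of eqs. (5.9)-(5.10): derive the filter/beam correction
    F_b from the signal only simulations, then noise-debias and correct
    the raw bandpowers."""
    F_b = signal_sims.mean(axis=0) / expectation   # eq. (5.9)
    noise_mean = noise_sims.mean(axis=0)
    return (raw_bands - noise_mean) / F_b          # eq. (5.10)
```

By construction, feeding back the mean signal plus noise simulation bandpowers recovers the input expectation values exactly.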
The uncertainty on the bandpower values is determined from the fluctuations in the simulations; the covariance matrix is given by

C_{bb'} = \left\langle \left( \hat{C}_b - \langle \hat{C}_b \rangle_{MC} \right) \left( \hat{C}_{b'} - \langle \hat{C}_{b'} \rangle_{MC} \right) \right\rangle_{MC}    (5.12)

and the uncertainty is calculated from the diagonal elements as

\Delta \hat{C}_b = \sqrt{C_{bb}} .    (5.13)
The bandpowers and their uncertainty for a single simulation realization from 200 < ` < 3000
are plotted in Figure 5.7 along with spectra from all the jackknives, which were estimated
in precisely the same way.
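Equations 5.12 and 5.13 in numpy, estimating the covariance from the scatter of the simulation realizations:

```python
import numpy as np

def bandpower_covariance(sim_bands):
    """Sketch of eqs. (5.12)-(5.13): bandpower covariance from the
    scatter of simulation realizations (shape (n_realizations, n_bands)),
    with 1-sigma uncertainties read off the diagonal."""
    dev = sim_bands - sim_bands.mean(axis=0)
    cov = dev.T @ dev / sim_bands.shape[0]
    sigma = np.sqrt(np.diag(cov))
    return cov, sigma
```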
It is important to note that the uncertainty on the bandpowers calculated in equation 5.13 is only valid for the input model used; a different model will change the signal and corresponding sample variance, which enters the covariance matrix calculation in equation 5.12. To see this more clearly, consider that the bandpowers are χ² distributed (the sum of the squares of Gaussian random numbers) with ν = (2ℓ + 1) degrees of freedom per multipole for a full sky experiment.2 The distribution of bandpower deviations normalized by the uncertainty is plotted in Figure 5.8; for most of the bands in question (all the bands at ℓ > 2000), ν is large enough that the distribution is Gaussian to a close approximation.
Then for any set of bandpowers: signal {S_b}, noise {N_b} or signal plus noise {R_b}, the uncertainty is given by

\frac{\Delta S_b}{\langle S_b \rangle} = \frac{\Delta N_b}{\langle N_b \rangle} = \frac{\Delta R_b}{\langle R_b \rangle} = \sqrt{\frac{2}{\nu}} .    (5.14)
This implies that since \langle R_b \rangle = \langle S_b \rangle + \langle N_b \rangle, then \Delta R_b = \Delta S_b + \Delta N_b, and thus changing the
2. ν = (2ℓ + 1) ∆ℓ f_sky per band for an experiment that only covers the fraction f_sky of the sky and bins into bandpowers ∆ℓ wide.
Figure 5.7: The final result of the high-` analysis: bandpowers and their uncertainty calculated from the signal plus noise plus point source simulations (black points) — illustrated
here with a single realization — and shown in comparison to the ΛCDM input model (red).
Bandpowers for the four jackknives relevant to this result (see text) are also shown for the
same realization (colored points).
Figure 5.8: The distribution of signal plus noise bandpower deviations (color) normalized
by the bandpower uncertainty for all bands at ` < 3000. The color-scale ranges from blue
for bands near ` ≈ 200 to red for bands near ` ≈ 3000. A Gaussian distribution with width
σ = 1 is over-plotted for comparison (black).
signal \langle S_b \rangle will proportionally change the bandpowers R_b and the uncertainty \Delta R_b. This has the odd implication that since the uncertainty on a model with more power is greater, it is actually more likely than one with less power — a "high-side tail" in the probability distribution of the model given the data.
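The √(2/ν) scaling of equation 5.14 is easy to check numerically; the values of ν and the number of realizations below are illustrative only:

```python
import numpy as np

# Fractional scatter of chi^2-distributed bandpowers vs. the sqrt(2/nu)
# prediction of eq. (5.14).  nu and n_sims are illustrative values only.
nu, n_sims = 100, 200_000
rng = np.random.default_rng(0)
bands = rng.chisquare(nu, size=n_sims) / nu   # mean-one bandpower draws
frac_scatter = bands.std() / bands.mean()
print(frac_scatter, np.sqrt(2.0 / nu))        # both close to 0.14
```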
To answer the question "is the data consistent with the model?" we can calculate a χ² for the estimated bandpowers \hat{C}_b against the input model expectation values E_b as

\chi^2 = \left( \hat{C}_b - E_b \right) C^{-1}_{bb'} \left( \hat{C}_{b'} - E_{b'} \right) .    (5.15)
The bandpower deviations (\hat{C}_b - E_b)/\Delta \hat{C}_b for a single simulation realization are plotted against the deviations of all the simulation bandpowers in Figure 5.9. Note again that for
Figure 5.9: Bandpower deviations normalized by the bandpower uncertainty for all signal plus noise simulations (grey) and one random realization (blue). The distributions are very nearly Gaussian; the 2nd, 16th, 50th, 84th and 98th percentile points of the bandpower distributions (yellow, green, red, green and yellow respectively) — corresponding to roughly ±2σ, ±1σ and 0σ, or encompassing 95%, 68% and the mean of a Gaussian distribution — are shown.
comparison against any other models, the covariance matrix must be rescaled.
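The χ² and PTE calculation described above can be sketched as follows; the PTE is taken from the empirical simulation χ² distribution rather than an analytic CDF:

```python
import numpy as np

def chi2_pte(data_bands, expectation, cov, sim_bands):
    """Sketch of eq. (5.15) plus the PTE test: chi^2 of the bandpowers
    against the model expectation using the simulation covariance, with
    the probability to exceed taken from the empirical distribution of
    simulation chi^2 values."""
    cov_inv = np.linalg.inv(cov)
    def chi2(bands):
        d = bands - expectation
        return d @ cov_inv @ d
    x2_data = chi2(data_bands)
    x2_sims = np.array([chi2(s) for s in sim_bands])
    return x2_data, np.mean(x2_sims > x2_data)
```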
5.2 The High-ℓ TT Spectra
Observations of the cosmic microwave background (CMB) anisotropy at angular scales of several arcminutes or larger (ℓ < 2000) have been used to constrain parameters of the ΛCDM cosmological model to high precision (Castro et al., 2009; Dunkley et al., 2009). At larger angular scales, the anisotropic power is dominated by the primary CMB from the surface of last scattering. At the smaller angular scales considered here, the primary anisotropy is exponentially suppressed by diffusion in the primordial plasma, and the structure becomes dominated by foreground emission and secondary anisotropy generated by intervening large scale structure.

The most relevant foreground — already encountered in this analysis — is bright radio sources, which were masked out of the map at the spectrum estimation stage. Nevertheless, the residual radio source contribution from sources at < 5σ may still be significant and must be considered. The secondary anisotropy most relevant at these scales is introduced by the thermal Sunyaev-Zel'dovich effect (SZE); measuring this anisotropy has been of particular recent interest. The magnitude of the SZE power is a sensitive and independent probe of the amplitude of density perturbations, scaling as σ_8^7 (Komatsu & Seljak, 2002). The QUaD bandpowers presented here are the most sensitive to date at 100 or 150 GHz at the relevant scales and can be used to place strong constraints on the SZE amplitude.
5.2.1 High-ℓ TT Bandpower Results
The basic result is shown in Figure 5.10: the TT bandpower values at 100 and 150 GHz extending to ℓ = 3000. The bandpower uncertainties are calculated from the spread in the signal plus noise simulation bandpowers (Section 5.1), assuming the input ΛCDM theory spectrum shown. Jackknife maps made from differencing independent data sets covering the same sky are a powerful test for systematic contamination; the bandpower values for all the jackknives are also shown in the figure.

In the multipole range considered for this analysis, the sky power has been suppressed by almost an order of magnitude through beam convolution. Thus a small mis-estimate of the beam would result in a large, multipole-dependent systematic shift in the bandpower values.
An under(over)-estimate of the beam suppression would result in an under(over)-estimate of our bandpowers. The effect of the systematic uncertainties on the beam model — shown in Figure 4.20 — and calibration (7% in power) is illustrated in Figure 5.10, which shows the result of pushing both up/down simultaneously by 1σ. While the systematic uncertainty is significant at both frequencies, it is not sufficient to qualitatively change the results at 150 GHz.

While it is customary to present the bandpower results with error bars as in Figure 5.10, it is more natural to consider them as in Figure 5.11. There, instead of plotting the uncertainty as error bars, I explicitly illustrate the 16, 50 and 84% points of the signal plus noise simulation bandpower distribution.3 This elucidates the fundamental result of the analysis — is the data consistent with being a realization of the simulation model? The distributions of both the signal plus noise (ΛCDM only) and the signal plus noise plus radio source (ΛCDM+pntsrc) simulations are shown.
The bandpowers presented in Figures 5.10 & 5.11 were calculated after masking bright point sources in the maps. The effect of masking them is shown in Figure 5.11 as the difference between the light grey and black points. Simply masking the brightest sources (> 5σ) is enough to bring our data into broad agreement with ΛCDM alone, and at both frequencies the residual radio source power is small, insignificant at 150 GHz.

The data shown in Figures 5.10 & 5.11 are listed explicitly in Tables 5.1 & 5.2. The residual radio source power is calculated as the mean difference between the full signal plus noise plus point source (ΛCDM+pntsrc) and signal plus noise only (ΛCDM) simulation bandpowers, \hat{C}_b^{pntsrc} = \langle \hat{C}_b \rangle_{MC} - \langle \hat{C}_b^{lcdm} \rangle_{MC}.
3. Equivalent to the -1, 0, +1 σ for a Gaussian distribution.
Figure 5.10: High-` TT bandpower values at 100 GHz (bottom) and 150 GHz (top) for
signal and relevant jackknives compared to the ΛCDM input model. The error bars include
ΛCDM sample variance, residual radio source foreground sample variance and noise. The
triangles indicate the effect of a coherent shift to the 1σ confidence limits of the beam and
absolute calibration uncertainty.
Figure 5.11: High-` TT bandpower values at 100 GHz (bottom) and 150 GHz (top) for the
real bandpowers compared to the spread in the signal plus noise simulations (in place of
the error bars in Figure 5.10) for ΛCDM alone and ΛCDM plus a residual radio source
foreground (ΛCDM +pntsrc). Bandpowers estimated with (black) and without (grey) source
masking are shown for comparison.
Table 5.1. 100 GHz Bandpower Results

Bandcenter    Bandpower               Bandpower     ΛCDM          Residual Radio
Multipole ℓ   ℓ(ℓ+1)C_ℓ/2π [µK²]      Uncertainty   Expectation   Source Power a
2067          215.69                  44.63         216.03        13.47
2148          218.25                  47.43         175.00        13.75
2229          173.38                  51.65         131.69        14.28
2310          26.06                   64.47         113.77        16.04
2391          195.66                  77.88         103.22        18.43
2472          110.96                  89.45         85.06         18.20
2553          256.75                  108.40        69.05         20.23
2634          64.58                   142.37        61.06         22.85
2715          158.62                  164.06        53.72         25.97
2796          -241.33                 203.13        43.18         27.72
2877          -375.80                 248.61        34.32         29.45
2958          -582.07                 307.18        29.35         32.37
Table 5.2. 150 GHz Bandpower Results

Bandcenter    Bandpower               Bandpower     ΛCDM          Residual Radio
Multipole ℓ   ℓ(ℓ+1)C_ℓ/2π [µK²]      Uncertainty   Expectation   Source Power a
2067          177.78                  27.30         215.50        3.39
2148          211.06                  25.64         173.84        3.53
2229          132.88                  22.75         130.48        3.47
2310          92.17                   24.61         113.07        4.04
2391          101.91                  24.28         102.71        4.56
2472          71.13                   24.41         84.42         4.81
2553          109.44                  24.27         68.46         4.42
2634          98.35                   27.98         60.67         4.89
2715          43.75                   32.08         53.34         4.69
2796          39.91                   33.81         42.73         5.20
2877          21.11                   36.55         33.92         5.12
2958          -7.21                   39.03         29.06         5.99

a The residual radio source power has already been subtracted from the bandpowers.
Table 5.3. χ² Tests for Bandpowers vs. the Simulation Distribution

                            100 GHz             150 GHz
Spectrum Test               χ²/dof    PTE a     χ²/dof    PTE a
ΛCDM only                   18.2/12   0.11      12.0/12   0.44
ΛCDM + pntsrc               16.4/12   0.16      11.3/12   0.49
Deck Jackknife              19.1/12   0.09      8.3/12    0.76
Scan Direction Jackknife    8.9/12    0.70      16.2/12   0.20
Split Season Jackknife      9.9/12    0.63      11.9/12   0.46
Focal Plane Jackknife       15.5/12   0.21      25.4/12   0.02*

a Asterisks denote PTE values below 0.05, which are considered failures.
5.2.2 Agreement with Simulation
The consistency between the bandpowers and the input model is quantified by calculating χ² between the data and the simulation distribution using equation 5.15, with the bandpower covariance matrix given by equation 5.12. Since we are testing for consistency with the simulations, the model expectation value is simply given by the simulation mean, i.e. E_b = \langle \hat{C}_b \rangle_{MC} — the solid lines in Figure 5.11.4 For the jackknives, the expectation values are null: E_b^J = 0. This calculation is repeated for all the simulations to construct a χ² probability distribution, and a probability to exceed (PTE) is determined by integrating the function from the data χ² value to infinity.

The resulting χ² and PTE values are provided in Table 5.3; both the ΛCDM only and ΛCDM+pntsrc expectation values are considered for the real bandpowers. The bandpowers are completely consistent with ΛCDM alone and there are no problems with the jackknives — a single failure is expected out of twenty, and moreover, the focal plane jackknife is the least meaningful for this analysis. There is a small reduction in the χ² and a corresponding

4. Moreover, the signal has been forced to the model expectation via the filter/beam correction.
Figure 5.12: Bandpower deviations for the real bandpowers (blue) compared against all signal
plus noise simulations (grey).
improvement in the PTE values when including point sources in the simulations. The bandpower deviations are plotted in Figure 5.12; at 100 GHz there is a hint of a correlated excess, ∼ 1σ at most.
CHAPTER 6
A COSMOLOGICAL CONTEXT
A great man once said, and I paraphrase: “Theories come and go, but the bandpowers stand
forever.” That being said, bandpowers at 2000 < ` < 3000 alone aren’t much to look at.
Therefore I devote this last chapter to framing them in a broader cosmological context.
The results presented in Section 5.2 are consistent with the ΛCDM expectation as quantified by the χ2 tests presented in Table 5.3, with a marginal improvement when including the
contribution from residual radio point sources. Nevertheless, this data can still be used to
place constraints on any potential secondary anisotropy and as previously stated, the most
compelling and relevant of these in the multipole range considered here is the thermal SZE.
In this chapter I first broadly review the current state of theoretical SZE anisotropy
spectrum predictions in Section 6.1, before fitting the de facto standard templates to the
QUaD data in Section 6.2. Finally in Section 6.3 I discuss the relationship between QUaD
and existing results.
6.1 SZE Secondary Anisotropy Spectra
The thermal Sunyaev-Zel'dovich effect (SZE, Sunyaev & Zeldovich, 1972) is a spectral distortion in the CMB radiation field imprinted by the hot intra-cluster medium (ICM) of foreground galaxy clusters. As the cold CMB blackbody photon distribution passes through the ionized gas of the ICM it undergoes Thomson scattering, cooling the thermalized electrons. The photons are preferentially redistributed to higher energies by a factor of ∼ k_B T_e/m_e c², where T_e and m_e are the electron distribution temperature and the electron mass respectively. This results in a decrement in the CMB temperature (or intensity) at low frequency and an increment at high frequency, with a null at ∼ 220 GHz; the spectral distortion is illustrated in Figure 6.1.
Figure 6.1: Illustration of the CMB spectral distortion due to the SZE. The dashed line is the 2.73 K blackbody spectrum and the solid line is the resulting SZE distorted spectrum. Figure from Carlstrom et al. (2002).
The resulting change in the Rayleigh-Jeans temperature is given by

\frac{\Delta T_{SZE}}{T_{CMB}} = f(x) \int n_e \frac{k_B T_e}{m_e c^2} \sigma_T \, dl ,    (6.1)

where n_e is the electron number density, σ_T is the Thomson cross-section and the integral is along the line-of-sight, l. The prefactor f(x) encapsulates the frequency dependence of the distortion, where x is a dimensionless frequency, x ≡ hν/k_B T_{CMB}. The integrand in equation 6.1 is entirely dependent on the properties of a given cluster; it is commonly referred to as the Compton-y parameter. The corresponding change in the intensity is written as
\Delta I_{SZE} = g(x) I_0 \, y .    (6.2)

The temperature and intensity frequency dependence functions (plotted in Figure 6.2) are given by the two expressions,

f(x) = x \frac{e^x + 1}{e^x - 1} - 4 ,    (6.3)

g(x) = \frac{x^4 e^x}{(e^x - 1)^2} f(x) .    (6.4)

6.1.1 Analytic Estimates
Since the SZE is a distortion of the spectrum itself — preserving the total number of photons — there is effectively no redshift dependence in the signal. This makes it an incredibly powerful tool for studying large-scale structure at high redshift (Carlstrom et al., 2002). Clearly the Compton-y parameter for a cluster of a given mass will evolve in time as the cluster virializes. Moreover, from a cosmological perspective, the number density of objects of a given mass dn/dM (the mass function) will evolve as well. Thus the SZE power distribution
Figure 6.2: The temperature and intensity frequency dependence of the SZE. Note that the
SZE temperature signal is ∼ 2 times greater at 30 GHz than it is at 150 GHz, resulting in
nearly a factor ∼ 4 in the power spectrum.
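The frequency dependence functions are simple to evaluate directly (using T_CMB ≈ 2.725 K); doing so confirms the factor of ∼ 2 in temperature (∼ 4 in power) between 30 and 150 GHz quoted in the caption:

```python
import numpy as np

# Evaluate the SZE frequency dependence of eqs. (6.3)-(6.4);
# x = h*nu / (k_B * T_CMB), with T_CMB ~ 2.725 K assumed here.
h_over_kT = 6.62607e-34 / (1.38065e-23 * 2.725)   # seconds

def f_sze(nu_hz):
    """Temperature frequency dependence, eq. (6.3)."""
    x = h_over_kT * nu_hz
    return x * (np.exp(x) + 1.0) / (np.exp(x) - 1.0) - 4.0

def g_sze(nu_hz):
    """Intensity frequency dependence, eq. (6.4)."""
    x = h_over_kT * nu_hz
    return x**4 * np.exp(x) / (np.exp(x) - 1.0)**2 * f_sze(nu_hz)

print(f_sze(30e9) / f_sze(150e9))   # ~2x the temperature signal at 30 GHz
```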
on the sky — the secondary anisotropy — is an integrated quantity that depends on these cosmological processes. The power spectrum can be estimated analytically via the expression

C_\ell = f(x)^2 \int_0^{z_{max}} dz \, \frac{dV}{dz} \int_{M_{min}}^{M_{max}} dM \, \frac{dn}{dM} \, |\tilde{y}_\ell(M, z)|^2 ,    (6.5)
where V(z) is the comoving volume of the universe at z per steradian and \tilde{y}_\ell is the Fourier transform of the Compton-y parameter, which depends on the evolution of clusters with redshift.

The expression in equation 6.5 is generically a halo model approach. The mass function term can be estimated analytically using the Press-Schechter formalism (Press & Schechter, 1974) or fit to N-body simulations (Jenkins et al., 2001), while the Compton-y parameter will depend on non-trivial gas physics that determine the density, pressure and temperature profiles of halos.
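As a toy numerical sketch of the double integral in equation 6.5 (the volume element, mass function and profile transform below are placeholder functions chosen only to make the integral concrete, not physical models):

```python
import numpy as np

# Toy numerical evaluation of the halo-model integral of eq. (6.5).
# dV/dz, dn/dM and |y_l|^2 are PLACEHOLDER functions, not real models.
def dV_dz(z):
    return z**2                        # toy comoving volume element

def dn_dM(M, z):
    return M**-2 * np.exp(-z)          # toy evolving mass function

def y_ell_sq(M, z):
    return (M / 1e14)**2 / (1.0 + z)   # toy |y_l(M, z)|^2

def C_ell_toy(f_x, zmax=2.0, nz=200, nM=200):
    z = np.linspace(1e-3, zmax, nz)
    M = np.logspace(13, 15, nM)        # M_min = 1e13, M_max = 1e15
    dM = np.gradient(M)
    dz = z[1] - z[0]
    # inner mass integral at each redshift, then the outer z integral
    inner = np.array([np.sum(dn_dM(M, zi) * y_ell_sq(M, zi) * dM)
                      for zi in z])
    return f_x**2 * np.sum(dV_dz(z) * inner) * dz
```

Note that the frequency dependence f(x)² enters only as an overall multiplier on the integrated power.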
The "gold-standard" analytic estimate comes from Komatsu & Seljak (2002); they take the N-body approach to the number distribution and adopt an NFW (Navarro et al., 1997) density profile. They further assume that the baryonic gas is in hydrostatic equilibrium with the dark matter, and also traces it so that \rho_b = (\Omega_b/\Omega_m) \rho_m. Finally, they use a constant polytropic index pressure profile, which is only a good approximation outside the central core of a cluster, and ignore any corrections for AGN feedback or star formation. The former assumption likely makes a negligible impact at the angular scales considered here (ℓ ∼ 3000), whereas the latter will only result in a fractional change in the Compton-y amplitude.
6.1.2 Simulations
An alternative to analytic estimates is to directly simulate the SZE sky, and such simulations tend to come in two flavors. The first method, the most computationally taxing though most likely physically accurate, involves full-blown hydrodynamical (hydro) simulations, which track the evolution of both dark and baryonic matter from initial density perturbations. These come predominantly in two flavors, Moving-Mesh Hydrodynamical (MMH) and Smoothed Particle Hydrodynamical (SPH). The hydro simulations can include rules for cooling, star formation, AGN feedback and metal enrichment to directly test the role of these processes on the gas profiles in halos.

The second, substantially quicker, method involves "painting" gas profiles onto dark-matter-only simulations. Again, there are two flavors: standard N-body and PINOCCHIO (Monaco et al., 2002), which uses Lagrangian perturbation theory to calculate collapse times for linear density perturbations and builds halo merger histories. In this case, the gas profiles are added to identified halos akin to the analytic estimates, requiring parameterization of the relevant physical processes.

Both hydro and N-body simulations tend to have (2^{8±1})³ particles (for gas and dark matter each in hydro simulations) and box sizes of several hundred h⁻¹ Mpc on a side. The SZE sky maps are constructed by stacking redshift slices as viewed along multiple lines-of-sight. The halos are identified and their gas profiles are converted into projected Compton-y parameter maps. This process is laborious, requiring long runs even on supercomputers, so simulations will typically only be run for a fixed set of cosmological parameters. The exception to this is PINOCCHIO, which can be run on a desktop computer over several hours, allowing for a broader sampling of parameter space.
6.1.3 Spectra Properties
Over the past decade there have been a number of SZE simulations and analytic estimates; a non-exhaustive list is provided in Table 6.1, and these are also the subject of Figures 6.3–6.5 as indicated.
Komatsu & Seljak (2002) confirm a known dependence of the spectral amplitude on the cosmological parameters σ_8 and Ω_b such that

C_\ell \propto \sigma_8^7 (\Omega_b h)^2 ,    (6.6)
for σ8 ≈ 1. This is not surprising considering the two main components of equation 6.5: the
SZE Compton-y parameter is driven by the gas physics which will depend on the baryon
density, and the mass function is strongly dependent on the amplitude of initial density
perturbations. The amplitude and shape of the spectrum are only very weakly dependent on other parameters.
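As a concrete illustration, the scaling of equation 6.6 can be used to bring spectra computed for different cosmologies onto a common footing, as in the bottom-right panel of Figure 6.3. A minimal sketch (the function name and reference values are mine; illustrative only, not the pipeline code):

```python
import numpy as np

def rescale_sz_spectrum(cl, sigma8, omega_b, h,
                        sigma8_ref=0.8, omega_b_ref=0.044, h_ref=0.72):
    """Rescale an SZE power spectrum from the cosmology it was computed
    in to a reference cosmology, using C_l ~ sigma8^7 (Omega_b h)^2
    (equation 6.6, valid for sigma8 near 1)."""
    scale = (sigma8_ref / sigma8) ** 7 \
          * ((omega_b_ref * h_ref) / (omega_b * h)) ** 2
    return np.asarray(cl) * scale

# A spectrum from a sigma8 = 1.0 run is suppressed by 0.8^7 ~ 0.21 when
# rescaled to sigma8 = 0.8 with the baryon parameters held fixed:
cl_ref = rescale_sz_spectrum(np.ones(5), sigma8=1.0, omega_b=0.044, h=0.72)
```

The steep σ_8^7 dependence in the scale factor is what makes the SZE spectrum such a sensitive, if systematics-limited, probe of σ_8.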
After scaling by equation 6.6 they find good agreement between their spectra and simulations, particularly with the highest resolution simulations of Zhang et al. (2002), and with
Table 6.1. SZE Power Spectra Estimation References

Reference                     Method^a      Figure(s)  σ8          Ωb     h
Refregier et al. (2000)       MMH           6.3        0.80        0.049  0.70
Refregier & Teyssier (2000)   AMRH          6.3        0.93        0.039  0.70
da Silva et al. (2000)        SPH           6.3        0.90        0.038  0.71
Seljak et al. (2001)          MMH           6.3        0.80        0.049  0.70
Springel et al. (2001)        SPH           6.3, 6.4   0.90        0.040  0.67
Zhang et al. (2002)           MMH           6.3        1.09        0.050  0.70
White et al. (2002)           SPH           6.3, 6.5   0.90        0.040  0.70
Komatsu & Seljak (2002)       Analytic^b    6.3        0.80        0.044  0.72
Schulz & White (2003)         N-body        6.5        1.00        0.041  0.70
Bond et al. (2005)            SPH           6.4        0.90, 1.00  0.041  0.70
Bond et al. (2005)            MMH           6.4        1.00        0.050  0.70
Holder et al. (2007)          PINOCCHIO^c   6.5        0.60–1.00   —      0.70
Shaw et al. (2008)            N-body        6.5        0.77        0.044  0.72

^a The simulation codes are: MMH - Moving-Mesh Hydrodynamic, SPH - Smoothed Particle Hydrodynamic, and AMRH - Adaptive Mesh-Refinement Hydrodynamic.
^b The parameter values stated for Komatsu & Seljak (2002) are those from the WMAP 5 yr template used in Section 6.2.
^c Monaco et al. (2002)
all the spectra at multipoles of ℓ ∼ 3000 (see Figure 6.3). At large angular scales some simulations show a lack of power in comparison to the analytic estimate, which is attributed to the small sample sizes in the highly non-Gaussian multipole region. Also, they neglect sub-halo terms in their estimate, which is the likely source of the small angular scale disagreement with the SPH simulations of Springel et al. (2001). This is not likely to be due to any particularities of the SPH vs. MMH methods, as both da Silva et al. (2000) and Zhang et al. (2002) agree well.
Bond et al. (2005) take a more systematic approach, running both their own SPH and MMH simulations and making an analytic estimate (somewhat different from that of Komatsu & Seljak 2002), shown in Figure 6.4. The simulations show good agreement amongst
themselves, again reproducing the scaling of equation 6.6, as well as agreement with the
SPH simulations of Springel et al. (2001). Furthermore, the analytic estimates — which in
their case are fairly independent of simulations and use the Press-Schechter approach — also
show excellent agreement, though (by their own admission) there is considerable fine-tuning
to make this so.
Yet, though there is broadly good agreement, there is still at least a factor of two systematic uncertainty in the theoretical and simulation models, stemming from differences in both amplitude and shape. Perhaps the clearest treatment of this can be found in Sharp et al. (2009) and is shown here in Figure 6.5. There, four sets of simulated sky maps spanning values of σ_8 from 0.6 to 1.0 are passed through simulations of the SZA observations to produce bandpower estimates. The result clearly shows the systematic uncertainty in the simulations. These differences are most likely due to the treatment of physical processes and halo profiles. For instance, White et al. (2002) show that by including star formation, cooling, metallicity and AGN feedback, the amplitude of the spectrum first estimated in Springel et al. (2001) was suppressed by ∼ 50%. Furthermore, Holder et al. (2007) show that feedback has a strong effect on the shape of the spectrum, strongly suppressing small angular scale
Figure 6.3: Figure from Komatsu & Seljak (2002). Analytically estimated SZE angular power spectra (solid) compared with simulations (dashed) from: Seljak et al. 2001 (SBP01), Refregier et al. 2000 (RKSP00), da Silva et al. 2000 (daSilva), Zhang et al. 2002 (ZPW02), Springel et al. 2001 (SWH01), and Refregier & Teyssier 2000 (RT00). The dotted lines indicate r.m.s. fluctuations of the simulated power spectra. The bottom-right panel scales all the power spectra by their respective σ_8^7 (Ω_b h)² (see Table 6.1).
Figure 6.4: Figure from Bond et al. (2005). Top: The SZE spectra for a σ_8 = 1.0 SPH 256³ 200 Mpc (cyan) and an MMH 512³ 140 Mpc (red) simulation compared to the power spectrum from a σ_8 = 0.9 SPH 256³ 200 Mpc run (yellow) scaled using equation 6.6. Middle: The σ_8 = 0.9 512³ 400 Mpc (cyan) and 256³ 200 Mpc (magenta) SPH simulations compared to the spectra derived from Springel et al. (2001) (yellow). Bottom: MMH results (black dot-dash) compared to analytic halo model estimates with several parameter values not relevant here.
power.
For QUaD the simulated SZE sky maps are of little use: the simulation box-sizes constrain the maps to ≲ 2° × 2°, far too small for the ∼ 60 sq. deg. CMB field. The only recourse is to fit directly to the estimated spectra, which does not account for the intrinsic non-Gaussianity of the SZE signal, though in practice this effect is small (≲ 10% in spectrum amplitude) at the multipole range of interest for the QUaD field size. The systematic variations in the SZE spectra are much more important: the factor of two in spectrum amplitude translates into a ∼ 10% uncertainty on the inferred σ_8. At present, high-ℓ CMB data are more useful in calibrating the SZE simulations than the SZE simulations are at constraining σ_8.
6.2  Determining a Best-Fit SZ Template Amplitude
In this section I use the QUaD high-ℓ TT bandpowers to constrain a best-fit SZ template amplitude A_SZ for the Komatsu & Seljak (KS) model derived for the WMAP 5-year cosmology used as our simulation input model.¹
For this analysis I treat the 100 and 150 GHz bandpowers as two correlated data sets,
fitting each set individually and the two simultaneously (not the cross spectra). I use the
ΛCDM plus point source simulations and subtract the expectation values from the real
bandpowers to isolate the bandpower excess:
Ĉ_b^ex = Ĉ_b − ⟨Ĉ_b⟩_MC.    (6.7)
Assuming the excess is solely due to secondary anisotropy from the SZE, I fit the KS template convolved with the bandpower window functions and scaled to the appropriate bandpower frequency, C_b^ks, where the SZE scaling with frequency is given by equation 6.3. The only
1. Template from LAMBDA http://lambda.gsfc.nasa.gov/product/map/dr3/pow_sz_spec_get.cfm
Figure 6.5: Figure from Sharp et al. (2009) The mean and scatter of the maximum likelihood
power for simulated realizations of the SZA measurement using several sets of 50 simulated
Compton-y-maps made with a specific technique and a specified σ8 — Holder et al. (2007)
is shown with solid dots, while triangles represent Shaw et al. (2008), White et al. (2002)
and Schulz & White (2003) (see Table 6.1 for more info). The points at σ8 = 0.9 and 1.0
are offset slightly for clarity. The error bars reflect the scatter of the realizations, and thus
include the additional sample variance from the non-Gaussianity of the y-maps. The dashed
line and shaded region represent the maximum likelihood power for the secondary CMB
anisotropy measured by the SZA.
parameter left free in the fit is a scale parameter A_SZ, which can be related back to σ_8 via the empirical scaling relation,

σ_8 = 0.8 A_SZ^{1/7}.    (6.8)
The bandpower covariance matrix is calculated from the distribution of simulated bandpowers and incorporates the random instrumental noise — in practice uncorrelated between frequencies — and the correlated ΛCDM plus residual radio source foreground sample variance. At these multipoles, though, the instrumental noise dominates the covariance matrix and the 100 and 150 GHz bandpowers are actually only weakly correlated.² Additionally, the beam model and absolute calibration systematic uncertainties are significant at these multipoles and must be included in the analysis. A systematics term is incorporated into the covariance matrix as
C′_{bb′} = C_{bb′} + δη² ⟨Ĉ_b⟩_MC ⟨Ĉ_{b′}⟩_MC + (δB_b ⟨Ĉ_b⟩_MC)(δB_{b′} ⟨Ĉ_{b′}⟩_MC),    (6.9)

where Ĉ_b is the estimated bandpower value, δη is the fractional absolute calibration uncertainty and δB_b is the fractional beam uncertainty.
The systematic term scales with the bandpower values and is correlated between the bandpowers at a given frequency, but it is not clear to what extent it will be correlated between frequencies. The beam model is constructed using the same methodology and data for both frequencies. Furthermore, as we have seen in the previous section, it is not driven by random noise but rather by gross fluctuations in the state of the optics, i.e., cone expansion, pointing wander, etc. Therefore it is likely that a systematic error in the beam would be correlated between frequencies.³ The uncertainty on the absolute calibration is roughly
split between the inherited uncertainty on the B03 calibration and noise in our methods.
2. In practice this correlation is undetectable with the 500 simulations.
3. This is certainly true for the main-lobe, which dominates the uncertainty at these scales.
Therefore I consider both cases, though the final results are insensitive to this correlation.
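Schematically, folding these multiplicative systematics into the covariance looks as follows. This is a sketch only (the function and variable names are mine, and the toy numbers are not the QUaD values): it adds calibration and beam terms that scale with the model expectation, fully correlated across the bandpowers of one frequency, in the spirit of equation 6.9.

```python
import numpy as np

def add_systematics(cov, model_mc, delta_eta, delta_beam):
    """Add calibration and beam systematic terms, each scaling with the
    model expectation <C_b>_MC, to a bandpower covariance (cf. eq. 6.9)."""
    model_mc = np.asarray(model_mc, dtype=float)
    cal = delta_eta * model_mc                  # fractional abscal error
    beam = np.asarray(delta_beam) * model_mc    # fractional beam error
    return cov + np.outer(cal, cal) + np.outer(beam, beam)

# toy numbers: 3 bandpowers, 5% calibration and a few-percent beam error
cov0 = np.diag([100.0, 64.0, 36.0])
cov = add_systematics(cov0, model_mc=[100.0, 50.0, 25.0],
                      delta_eta=0.05, delta_beam=[0.10, 0.08, 0.05])
```

The outer products make the systematic contribution fully correlated between bandpowers, which is what downweights and couples the low-multipole points in Figure 6.7.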
Given the bandpower residuals and their covariance, I can calculate a naive χ² against the KS template for a range of finely sampled A_SZ values, where

χ² = (Ĉ_b^ex − A_SZ · C_b^ks) C′_{bb′}⁻¹ (Ĉ_{b′}^ex − A_SZ · C_{b′}^ks).    (6.10)
As noted, the bandpowers are drawn from either the 100 GHz set, the 150 GHz set or both.
The resulting likelihood distribution is given by

P(Ĉ_b^ex | A_SZ · C_b^ks) = e^{−χ²/2};    (6.11)
the best-fit value of A_SZ is found at the maximum likelihood and the uncertainty where Δχ² = 1.⁴ The function is very nearly Gaussian so the ±1σ uncertainties are equivalent. I calculate the upper limit as the 95th percentile of the likelihood multiplied by a flat prior,

P(A_SZ) = { 1  for A_SZ ≥ 0
          { 0  for A_SZ < 0.    (6.12)
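The fit itself is a one-dimensional grid scan. A minimal sketch of equations 6.10–6.12, with the σ_8 conversion of equation 6.8 (function and variable names are mine; the real analysis uses the full QUaD bandpowers, window-convolved template and covariance):

```python
import numpy as np

def fit_asz(excess, cov, template, a_grid=None):
    """Brute-force likelihood scan for the SZE template amplitude A_SZ:
    chi^2 on a fine A_SZ grid (eq. 6.10), maximum likelihood (eq. 6.11),
    and a 95% upper limit under the flat positive prior of eq. 6.12."""
    if a_grid is None:
        a_grid = np.linspace(-5.0, 5.0, 2001)
    icov = np.linalg.inv(cov)
    resid = excess[None, :] - a_grid[:, None] * template[None, :]
    chi2 = np.einsum('ij,jk,ik->i', resid, icov, resid)
    like = np.exp(-0.5 * (chi2 - chi2.min()))
    a_ml = a_grid[np.argmax(like)]
    pos = a_grid >= 0.0                  # flat prior: A_SZ >= 0 only
    cdf = np.cumsum(like[pos])
    cdf /= cdf[-1]
    a_ul = a_grid[pos][np.searchsorted(cdf, 0.95)]
    sigma8_ml = 0.8 * max(a_ml, 0.0) ** (1.0 / 7.0)   # equation 6.8
    return a_ml, a_ul, sigma8_ml
```

With a Gaussian-shaped likelihood the Δχ² = 1 points of such a scan reproduce the ±1σ intervals quoted below.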
Note that the covariance matrix C_{bb′} is accurate for the input ΛCDM model only, and that in principle the signal component of the covariance depends on the amplitude of the model expectation. There is already a well-established procedure for incorporating this phenomenon into model fitting, using a transformation of variables which assumes the bandpower expectation covariance is well described by an offset log-normal distribution. Yet this ansatz is tailored for an analysis that naturally provides the “probability of the model given the data”. The QUaD analysis provides the “probability of the data given the model”, i.e. the signal plus noise bandpower covariance matrix estimated from simulations based on an input
4. The 16th and 84th percentile points for a Gaussian likelihood distribution.
model. But in this case the covariance matrix can just be directly scaled as the assumed
model is changed. Before I introduce these complications, I first derive the results for the
naive method.
6.2.1  Naive Results
The resulting likelihood functions — assuming random noise only and including all systematics — are shown in Figure 6.6. The best-fit A_SZ, 1σ uncertainties and 95% CL upper limits are provided in Table 6.2. The first thing to note is that the uncertainty on the amplitude is surprisingly similar at both frequencies. This is because a change in the value of A_SZ results in a larger bandpower shift at 100 GHz than at 150 GHz, roughly by a factor of 2.5. Since the bandpower uncertainty at 100 GHz is roughly twice the uncertainty at 150 GHz — at least for the lower multipoles — the widths of the likelihood distributions appear similar. Also, as claimed, the effect of going from an uncorrelated to a correlated absolute calibration uncertainty is negligible in the best-fit parameters.⁵
Clearly the primary effect of the systematic uncertainties is to broaden the likelihood
distributions significantly. Perhaps surprisingly, the correlated systematic effects have also
actually moved the 150 GHz likelihood distribution so that the likelihoods for the two frequencies are in very good agreement. At first pass it isn’t particularly clear why this would
be the case, but inspecting the actual excess bandpowers and covariance matrices shown
in Figure 6.7 is revealing. The effect of including the systematics is firstly to downweight
the lower multipole points; since the model expectation is greater there, they are the most
susceptible to the systematics (see equation 6.9). Secondly, the systematic term also correlates the bandpowers, again more significantly at lower multipoles. The net effect is to
strongly suppress the lower multipole bandpowers and thereby increase the influence of two
5. The beam is always assumed to be correlated.
Figure 6.6: Naive Results: the likelihood distributions for the Komatsu & Seljak (2002) SZE template amplitude A_SZ fit to the 100 and 150 GHz data separately and to both simultaneously. The sample variance and instrumental noise bandpower covariance matrix is used exclusively in the top panel; the bottom panel includes systematic uncertainty from the beam model and absolute calibration, using a naive method to calculate the likelihood (see text). The top axis corresponds to the equivalent σ_8 for a given value of A_SZ below; negative values are not physical. The vertical bars denote the 95% upper limits (see text).
Table 6.2. A_SZ Best-Fit Values: Naive Likelihood

                                  100 GHz          150 GHz          Combined
Bandpower Covariance^a            ML^b       UL^b  ML         UL    ML         UL
random only                       0.6 ± 1.1  2.5   0.0 ± 0.9  1.8   0.3 ± 0.7  1.6
random + abscal                   0.6 ± 1.1  2.6   0.2 ± 1.1  2.3   0.4 ± 0.8  1.9
random + beam                     0.6 ± 1.2  2.7   0.2 ± 1.2  2.5   0.5 ± 1.0  2.2
random + beam and abscal          0.5 ± 1.4  3.1   0.4 ± 1.5  3.3   0.6 ± 1.2  2.7
random + abscal (corr)            0.6 ± 1.1  2.6   0.2 ± 1.1  2.3   0.4 ± 0.9  2.0
random + beam and abscal (corr)   0.5 ± 1.4  3.1   0.4 ± 1.5  3.3   0.7 ± 1.2  2.8

^a Random includes instrumental noise plus sample variance only, abscal is the absolute calibration uncertainty — which may be correlated — and beam is the beam uncertainty. See text for how systematics are included.
^b ML denotes the maximum likelihood value plus/minus Δχ² = 1 intervals; UL denotes the 95% confidence upper limit using the likelihood for positive A_SZ only.
particularly high values at ` ∼ 2600.
6.2.2  Offset Log-Normal Transformation
As stated before, a proper treatment of the χ² or likelihood calculation must take into account the change in the signal component of the covariance matrix C_{bb′} as A_SZ is varied. The standard way of doing this is to use the offset log-normal transformation described in Bond et al. (2000) (BJK), where they express the curvature (Fisher) matrix for an observation Δ given the model C_ℓ, for a full-sky experiment with uniform noise N_ℓ and a finite beam B_ℓ, as

F_{ℓℓ′} = − ∂²P(Δ | C_ℓ)/∂C_ℓ∂C_{ℓ′} ∝ (C_ℓ + N_ℓ/B_ℓ²)⁻² δ_{ℓℓ′}.    (6.13)
Therefore, as has already been stated repeatedly, upwards fluctuations are biased by higher uncertainty, i.e.

δC_ℓ ∝ C_ℓ + N_ℓ/B_ℓ².    (6.14)
Yet there is no such bias in a variable Z_ℓ such that

δZ_ℓ ∝ δC_ℓ / (C_ℓ + N_ℓ/B_ℓ²).    (6.15)
Hence the data, model and covariance matrix become

Ĉ_b^Z = ln(Ĉ_b + x_b)
C_b^Z = ln(⟨Ĉ_b⟩_MC + A_SZ · C_b^ks + x_b)    (6.16)
C^Z_{bb′} = C⁻¹_{bb′} (Ĉ_b + x_b)(Ĉ_{b′} + x_{b′}),

where the “x-numbers” x_b are related to the noise; for the above example x_b = N_ℓ/B_ℓ².
Figure 6.7: The residual bandpowers after subtracting the ΛCDM plus point source simulation mean for the 100 and 150 GHz bandpowers (left, first and second panels from the top respectively, black points), plotted versus the best-fit KS SZE template expectation and uncertainty (red solid and dashed respectively). The bandpower uncertainty is derived from the diagonal elements of the covariance matrix (right) for random noise alone (top) and with both beam and abscal uncertainty included (bottom). The assumed systematics have the double effect of increasing both the uncertainty and the correlation between bandpowers.
For a realistic experiment BJK give

C_ℓ / x_ℓ = (F_{ℓℓ}^{−1/2})_signal / (F_{ℓℓ}^{−1/2})_noise,    (6.17)
which for the QUaD analysis should be equivalent to

⟨S_b⟩_MC / x_b = ΔS_b / ΔN_b,    (6.18)

where the variables above have the same meaning as in the discussion related to equation 5.14; this implies that x_b = ⟨N_b⟩_MC.
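The transformation is straightforward to implement. A minimal numerical sketch of equations 6.16 and 6.19 (function and variable names are mine; x_b is supplied from the mean noise bandpowers as argued above):

```python
import numpy as np

def offset_lognormal_chi2(cb_hat, cb_model, cov, xb):
    """chi^2 in the offset log-normal variables of equation 6.16, with
    the Z-space weight matrix applied directly as in equation 6.19."""
    z_hat = np.log(cb_hat + xb)      # data:  Z_b = ln(Chat_b + x_b)
    z_mod = np.log(cb_model + xb)    # model: ln(<C_b> + A_SZ C_b^ks + x_b)
    # C^Z_{bb'} = C^{-1}_{bb'} (Chat_b + x_b)(Chat_b' + x_b')
    weight = np.linalg.inv(cov) * np.outer(cb_hat + xb, cb_hat + xb)
    d = z_hat - z_mod
    return float(d @ weight @ d)
```

For fluctuations small compared to Ĉ_b + x_b this reduces to the Gaussian χ², but it correctly penalizes upward and downward excursions asymmetrically.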
The only remaining issue is how to include the systematic uncertainty, for which there are no clearly established guidelines.⁶ So for this analysis I include the systematic contributions prior to transforming the covariance matrix into the offset log-normal form; it is not clear that this is correct. Using these transformations, χ² is given by
χ² = (Ĉ_b^Z − C_b^Z) C^Z_{bb′} (Ĉ_{b′}^Z − C_{b′}^Z).    (6.19)

6.2.3  Scaling Signal Variance
Alternatively, given the nature of the QUaD analysis pipeline, it should be simple to modify
the covariance matrix directly to account for the change in the model. In this case, the
covariance matrix is rescaled by the relative increase in signal plus noise power; this is valid
because the bandpowers are χ2 distributed (see equation 5.14). For a given value of ASZ
we can calculate the scale factor as the ratio of the new signal plus noise plus point source
6. Given more time I would have been happy to figure this out, but the next section renders the problem moot anyway.
plus KS SZE model to the old signal plus noise plus point source simulation expectation as

X_b = (⟨Ĉ_b⟩_MC + A_SZ · C_b^ks) / ⟨Ĉ_b⟩_MC.    (6.20)
Then we can simply rescale to get the new covariance matrix,

C^SZ_{bb′} = X_b X_{b′} C_{bb′}.    (6.21)
The systematic contributions are unaffected and simply add into the new covariance matrix
as in equation 6.9. Finally, the bandpower excess and χ2 are calculated as they were in
equations 6.7 and 6.10.
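In code this rescaling is essentially a one-liner. A sketch of equations 6.20 and 6.21 (names are mine):

```python
import numpy as np

def rescale_covariance(cov, mean_mc, template, a_sz):
    """Rescale the simulated (signal + noise + point source) bandpower
    covariance for a trial SZE amplitude A_SZ, per equations 6.20-6.21."""
    mm = np.asarray(mean_mc, dtype=float)
    x = (mm + a_sz * np.asarray(template)) / mm   # X_b of equation 6.20
    return np.outer(x, x) * cov                   # C^SZ_{bb'} = X_b X_b' C_{bb'}
```

Re-evaluating the χ² of equation 6.10 with a covariance consistent with each trial A_SZ is what shifts the likelihood peaks upward in Figure 6.8.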
6.2.4  Final Results and Implications for σ8
The resulting amplitudes from the naive, offset log-normal and covariance-rescale procedures — including all systematics in the covariance matrix — are listed in Table 6.3, and the naive method is compared to the covariance rescale in Figure 6.8. The offset log-normal and covariance-rescale procedures agree well with each other despite the unreconciled problem of including the systematic term in the former approach. Both methods prefer higher values of A_SZ than the naive method, as expected; note that the uncertainty on A_SZ is unchanged, only the maximum likelihood value shifts. Of course, the non-Gaussian methods are more formally correct. The final amplitudes are all entirely consistent with zero; the tighter constraints and multifrequency information from the combined fit make it the relevant final constraint, with A_SZ = 1.2^{+1.2}_{−1.1}.
The final constraints on the implied σ_8 for this model (using a scaled-covariance analysis including all systematics) are derived from the likelihood distributions shown in Figure 6.9, evaluated in σ_8-space, where they are highly non-Gaussian. Small values of σ_8 (< 0.6) are
Figure 6.8: Rescaling the Covariance: this figure should be compared with Figure 6.6. The
likelihood is first calculated naively assuming a fixed covariance (top, identical to the bottom
panel in Figure 6.6) and is then rescaled to incorporate the change in sample variance
(bottom). Systematic uncertainties from the beam model and absolute calibration are included in both likelihood estimates.
Table 6.3. A_SZ Best-Fit Values for All Likelihood Methods

                      100 GHz          150 GHz          Combined
Likelihood Method^a   ML^b       UL^b  ML         UL    ML               UL
naive                 0.5 ± 1.4  3.1   0.4 ± 1.5  3.3   0.6 ± 1.2        2.7
offset-lognormal      1.0 ± 1.4  3.7   1.1 ± 1.5  3.9   1.3 +1.2/−1.1    3.4
scaled covariance     0.9 ± 1.4  3.6   1.0 ± 1.5  3.8   1.2 +1.2/−1.1    3.3

^a All use a bandpower covariance matrix that includes instrumental noise, sample variance and uncertainty on both the (uncorrelated) absolute calibration and beam.
^b ML denotes the maximum likelihood value plus/minus Δχ² = 1 intervals; UL denotes the 95% confidence upper limit using the likelihood for positive A_SZ only.
equally likely, whereas large values (> 1) are completely ruled out (see Readhead et al., 2004, for a similar discussion). The consequence is agreement between the best-fit values and error bars implied by the A_SZ results and those from the σ_8 likelihood distributions, but with different values for the upper limits. As is clear from Figure 6.9, it is only reasonable to quote an upper limit on σ_8 derived via this analysis, and all three fits yield very similar upper limits of σ_8 ≲ 0.8.
Again, this is only a limit on σ_8 in the context of the KS template. Such a constraint is dependent on the still highly uncertain input SZ templates, which can vary by a factor of two in A_SZ amongst different analyses, translating to a ∼ 10% systematic uncertainty on σ_8 (Sharp et al., 2009; Sievers et al., 2009). Furthermore, several effects neglected in this analysis can have a significant impact on a σ_8 constraint determined by fitting template spectra. For instance, if bright SZE clusters are correlated with bright radio sources that reside within them (Coble et al., 2007), then masking these bright sources will tend to suppress the SZE power, weakening this constraint. On the other hand, if the residual radio source
Figure 6.9: The likelihood distributions for σ8 implied by the Komatsu & Seljak (2002)
SZE template fit to the 100 and 150 GHz data separately and to both simultaneously. The
sample variance and instrumental noise bandpower covariance is used including systematic
uncertainty from the beam model and absolute calibration. The likelihood is calculated
naively assuming a fixed covariance (top) and is then rescaled to incorporate the change in sample variance (bottom). The vertical bars denote the 95% upper limits (see text).
population is clustered, or if IR sources are important despite no significant detections in our maps, then our bandpowers are overestimates and the constraint derived above would be conservative. The non-Gaussian nature of the SZE distribution on the sky may also play a role in degrading the uncertainty on the template amplitude, though this is unlikely to be as significant for the QUaD field size. This constraint should be taken with these important caveats in mind.
6.3  Consistency with Recent High-ℓ Experiments
Figure 6.10 shows the QUaD high-ℓ TT results along with measurements from other recent CMB experiments. There is broad agreement between the various data sets — amongst each other and with ΛCDM — except at the highest multipoles. This is where CBI (Sievers et al., 2009) claims a significant excess, which was “confirmed” by ACBAR (Reichardt et al., 2009) at 1σ, whereas SZA (Sharp et al., 2009) is consistent with zero power. This intimation of excess power has been attributed to SZE signal and used to derive corresponding constraints on the amplitude of density perturbations, σ_8. Depending on the template used, the CBI (and to a lesser extent ACBAR) data imply values in the range 0.9 < σ_8 < 1.0. This is a departure from the WMAP 5-year results (Dunkley et al., 2009) and recent X-ray measurements of the cluster mass function (Vikhlinin et al., 2009), which both yield values of σ_8 ≈ 0.8.
The results in Section 6.2 are consistent with SZE power at the expected level for σ_8 = 0.8, but inconsistent with those of Sievers et al. (2009), which prefer a much higher scaling of A_SZ = 3.5 ± 1.3, a best-fit value that lies outside the QUaD 95% confidence upper limit. Yet there is no such tension with the conclusions of Sharp et al. (2009). For a broad band centered on ℓ = 4066 they find an excess of 14^{+71}_{−62} µK², or a 95% upper limit of 149 µK². This is in good agreement with the QUaD best-fit KS SZE template expectation of ≈ 40 µK²
Figure 6.10: The QUaD high-ℓ TT results for 2000 < ℓ < 3000 compared against recent results from ACBAR (Reichardt et al., 2009), CBI (Sievers et al., 2009) and SZA (Sharp et al., 2009) — spanning 30, 100 and 150 GHz (red, green and blue respectively) — plus WMAP (Hinshaw et al., 2009) (black) and QUaD (Brown et al., 2009) for ℓ < 2000 (grey). For QUaD, SZA and CBI the estimated residual radio source contribution has been subtracted. Note that negative data have been excluded from this log plot (inadvertently overemphasizing high values), and some points have been slightly offset in multipole for clarity. The data are plotted against ΛCDM alone and ΛCDM plus the standard Komatsu & Seljak (2002) template assuming two values of σ_8, scaled to all three frequencies. The QUaD data favor the σ_8 = 0.8 model with a best-fit scaling of A_SZ = 1.2^{+1.2}_{−1.1} (see text).
for this multipole at 30 GHz, but is not compatible with the ≈ 140 µK² expectation implied by the CBI result.
Moreover, Sharp et al. (2009) also make a strong argument that the CBI point source estimate is in fact too small. Point-source contamination is a critical concern for lower frequency small angular scale CMB measurements — at 30 GHz, for a modest spectral index (S ∝ ν⁻¹), the radio source power is a factor of 25 stronger than at 150 GHz. To estimate the CBI radio source foreground contribution, Mason et al. (2009) used the 31 GHz Green Bank Telescope (GBT) to observe 3165 sources from the 1.4 GHz NRAO VLA Sky Survey (Condon et al., 1998, NVSS) located in the CBI fields. Yet, if there exists a population of sources with an “inverted” spectral index (S ∝ ν⁺¹) — so that they are below the NVSS detection threshold at 1.4 GHz but significant at 30 GHz — then this analysis will be blind to those sources; this is precisely what Sharp et al. (2009) claim. SZA mitigates this effect in their own analysis via an 8 GHz survey closer to their 30 GHz observing frequency, and by identifying sources directly in their maps with higher resolution observations of their fields using the SZA long baselines.
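The factor of 25 quoted above is just the assumed spectral-index scaling squared in power; as a quick sanity check (illustrative arithmetic only):

```python
# Radio source flux density S ~ nu^alpha; for a modest alpha = -1 a
# source is 5x brighter at 30 GHz than at 150 GHz, and the Poisson
# point-source power scales as the flux squared.
flux_ratio = 150.0 / 30.0      # S(30 GHz) / S(150 GHz) = 5
power_ratio = flux_ratio ** 2  # radio source power ratio
print(power_ratio)             # 25.0
```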
Even so, the inverted sources are at best only a fractional correction to the estimated CBI radio source bandpowers, and cannot account for the excess, which is a factor of 6 greater than the residual radio source power. Still, this serves to illustrate the complexity of this measurement at 30 GHz due to foregrounds. While the frequency dependence of the SZE makes measurements at 100 and 150 GHz intrinsically less sensitive than those at 30 GHz, they are also less prone to contamination by radio sources. This is clear from Figure 5.11 in Section 5.1 — the radio source contribution to the QUaD bandpowers is insignificant. The QUaD results are an important complement and check on these other experiments.
CHAPTER 7
CONCLUSIONS
The QUaD telescope, a bolometric polarimeter located at the geographic South Pole, was
used to map the cosmic microwave background (CMB) temperature anisotropy at 100 and
150 GHz with a resolution of approximately 5 and 3.5 arcminutes respectively. An incident
sky-signal-only power spectrum was estimated from the resulting maps using simulations of
the telescope and observations to correct for noise and systematic filtering effects. Significant
power was detected on scales ranging from 200 < ` < 3000; the results in this thesis focused
specific attention on high-` measurements at multipoles of 2000 < ` < 3000.
Here the CMB acoustic oscillation paradigm predicts that the primary anisotropy from the surface of last scattering is exponentially damped by photon diffusion, and thus secondary anisotropy from intervening large-scale structure should dominate the power spectrum. The standard ΛCDM cosmological model makes precise predictions for the shape of the primary anisotropy at these scales, which are constrained by the WMAP 5 yr observations at larger angular scales. The secondary anisotropy, in contrast, is less well determined, as it depends on non-linear gravitational collapse after recombination and, in the case of the thermal Sunyaev-Zel’dovich effect (SZE), on physically complex hydrodynamical processes. Moreover, the high-ℓ power is actually dominated by extragalactic foreground radio point sources, which must be removed at high significance and then carefully accounted for at lower fluxes, where they are vastly more numerous and evenly distributed on the sky.
The QUaD high-ℓ TT spectrum results are broadly consistent with the ΛCDM expectation from primary CMB alone, after masking only the brightest (> 5σ) point sources in the maps. The potential contamination from residual un-masked radio sources was estimated from simulations using effectively power-law radio source flux distributions, normalized by the > 5σ sources and extrapolated to low flux with the de Zotti model (motivated by detailed astrophysics). Applying a source correction based on these simulations marginally improved the already good agreement with the ΛCDM expectation — as measured by χ² — at both frequencies.
Though there is no evidence for the detection of secondary anisotropies in the QUaD results presented here, they may still be used to constrain the possible level of such a signal. A standard analytically predicted template for the SZE power spectrum, calibrated to a σ_8 = 0.8 cosmology, was fit to the bandpowers at both frequencies simultaneously, resulting in a best-fit amplitude of A_SZ = 1.2^{+1.2}_{−1.1}. The dependence of the SZE on large-scale structure formation results in an exceptionally strong dependence on the amplitude of initial density perturbations, scaling as σ_8^7, which can be exploited to place strong constraints on this parameter — resulting in σ_8 ≲ 0.89, though important caveats neglected in the analysis presented in this thesis may degrade such a constraint.
These results are consistent both with the WMAP 5 yr cosmology and with other measurements of σ_8 from independent cosmological probes. They are also broadly consistent with CMB experiments at large, with perhaps the notable exception of the CBI experiment (Sievers et al., 2009). Yet neither QUaD nor any other experiment can confidently claim to have measured the signature of a thermal SZE secondary anisotropy power spectrum to date. However clouded the current measurements of the CMB damping tail may be — muddled by contamination from foregrounds and the limiting size and resolution of the telescopes — currently operating CMB experiments with arcminute resolution, such as the Atacama Cosmology Telescope (ACT; Kosowsky, 2003) and the South Pole Telescope (SPT; Padin et al., 2008), should resolve the situation in the near future.
REFERENCES
Ade, P. et al. 2008, ApJ, 674, 22
Bennett, C. L. et al. 2003, ApJS, 148, 1
Bond, J. R. et al. 2005, ApJ, 626, 12
Bond, J. R., Jaffe, A. H., & Knox, L. 2000, ApJ, 533, 19
Bowden, M. 2004, PhD thesis, The University of Wales
Bowden, M. et al. 2004, MNRAS, 349, 321
Brown, M. L. et al. 2009, ArXiv e-prints
Carlstrom, J. E., Holder, G. P., & Reese, E. D. 2002, ARA&A, 40, 643
Carroll, S. 2004, Spacetime and Geometry (San Francisco, California: Addison Wesley)
Castro, P. G. et al. 2009, ArXiv e-prints
Chandrasekhar, S. 1960, Radiative Transfer (New York: Dover Publications)
Chiang, H. C. et al. 2009, ArXiv e-prints
Church, S. et al. 2003, New Astronomy Review, 47, 1083
Coble, K. et al. 2003, ArXiv Astrophysics e-prints
—. 2007, AJ, 134, 897
Condon, J. J., Cotton, W. D., Greisen, E. W., Yin, Q. F., Perley, R. A., Taylor, G. B., &
Broderick, J. J. 1998, AJ, 115, 1693
da Silva, A. C., Barbosa, D., Liddle, A. R., & Thomas, P. A. 2000, MNRAS, 317, 37
de Zotti, G., Ricci, R., Mesa, D., Silva, L., Mazzotta, P., Toffolatti, L., & González-Nuevo,
J. 2005, A&A, 431, 893
Dicke, R. H., Peebles, P. J. E., Roll, P. G., & Wilkinson, D. T. 1965, ApJ, 142, 414
Dodelson, S. 2003, Modern Cosmology (San Diego, California: Academic Press)
Dunkley, J. et al. 2009, ApJS, 180, 306
Farese, P. C. et al. 2004, ApJ, 610, 625
Finkbeiner, D. P., Davis, M., & Schlegel, D. J. 1999, ApJ, 524, 867
Friedman, R. B. et al. 2009, ApJ, 700, L187
Gehrels, N. 1986, ApJ, 303, 336
González-Nuevo, J., Massardi, M., Argüeso, F., Herranz, D., Toffolatti, L., Sanz, J. L.,
López-Caniego, M., & de Zotti, G. 2008, MNRAS, 384, 711
Górski, K. M., Hivon, E., Banday, A. J., Wandelt, B. D., Hansen, F. K., Reinecke, M., &
Bartelmann, M. 2005, ApJ, 622, 759
Gregory, P. C., Vavasour, J. D., Scott, W. K., & Condon, J. J. 1994, ApJS, 90, 173
Herschel, W. 1785, Philosophical Transactions of the Royal Society of London, 75, 213
Hinderks, J. R. et al. 2009, ApJ, 692, 1221
Hinderks, J. R. 2005, PhD thesis, Stanford University
Hinshaw, G. et al. 2009, ApJS, 180, 225
Hivon, E., Górski, K. M., Netterfield, C. B., Crill, B. P., Prunet, S., & Hansen, F. 2002,
ApJ, 567, 2
Holder, G. P., McCarthy, I. G., & Babul, A. 2007, MNRAS, 382, 1697
Hu, W., & Dodelson, S. 2002, ARA&A, 40, 171
Hu, W., & White, M. 1997, New Astronomy, 2, 323
—. 2004, SciAm, 290, 44
Hubble, E. 1929, Proceedings of the National Academy of Sciences of the United States of
America, 15, 168
Jenkins, A., Frenk, C. S., White, S. D. M., Colberg, J. M., Cole, S., Evrard, A. E., Couchman,
H. M. P., & Yoshida, N. 2001, MNRAS, 321, 372
Kamionkowski, M., Kosowsky, A., & Stebbins, A. 1997, Phys. Rev. D, 55, 7368
Kolb, R. 1996, Blind Watchers of the Sky (Cambridge, Massachusetts: Perseus)
Komatsu, E., & Seljak, U. 2002, MNRAS, 336, 1256
Kosowsky, A. 1999, New Astronomy Review, 43, 157
—. 2003, New Astronomy Review, 47, 939
Kovac, J. M., & Barkats, D. 2007, ArXiv e-prints
Kovac, J. M., Leitch, E. M., Pryke, C., Carlstrom, J. E., Halverson, N. W., & Holzapfel,
W. L. 2002, Nature, 420, 772
Kuo, C. L. et al. 2004, ApJ, 600, 32
Lane, A. P. 1998, in Astronomical Society of the Pacific Conference Series, Vol. 141, Astrophysics from Antarctica, ed. G. Novak & R. Landsberg, 289
Lay, O. P., & Halverson, N. W. 2000, ApJ, 543, 787
Lewis, A., Challinor, A., & Lasenby, A. 2000, Astrophys. J., 538, 473
Masi, S. et al. 2006, A&A, 458, 687
Mason, B. S., Weintraub, L. C., Sievers, J. L., Bond, J. R., Myers, S. T., Pearson, T. J.,
Readhead, A. C. S., & Shepherd, M. C. 2009, ArXiv e-prints
Mauch, T., Murphy, T., Buttery, H. J., Curran, J., Hunstead, R. W., Piestrzynski, B.,
Robertson, J. G., & Sadler, E. M. 2003, MNRAS, 342, 1117
Monaco, P., Theuns, T., & Taffoni, G. 2002, MNRAS, 331, 587
Montroy, T. E. et al. 2006, ApJ, 647, 813
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
Nolta, M. R. et al. 2009, ApJS, 180, 296
O’Sullivan, C. et al. 2008, Infrared Physics and Technology, 51, 277
Padin, S. et al. 2008, Appl. Opt., 47, 4418
Peacock, J. A. 1999, Cosmological Physics, 1st edn. (Cambridge, United Kingdom: Cambridge University Press), 291–292
Penzias, A. A., & Wilson, R. W. 1965, ApJ, 142, 419
Press, W. H., & Schechter, P. 1974, ApJ, 187, 425
Pryke, C. et al. 2009, ApJ, 692, 1247
Readhead, A. C. S. et al. 2004, ApJ, 609, 498
Refregier, A., Komatsu, E., Spergel, D. N., & Pen, U.-L. 2000, Phys. Rev. D, 61, 123001
Refregier, A., & Teyssier, R. 2000, ArXiv Astrophysics e-prints
Reichardt, C. L. et al. 2009, ApJ, 694, 1200
Rybicki, G. B., & Lightman, A. P. 1979, Radiative Processes in Astrophysics (New York:
John Wiley & Sons)
Sachs, R. K., & Wolfe, A. M. 1967, ApJ, 147, 73
Schulz, A. E., & White, M. 2003, ApJ, 586, 723
Seljak, U., Burwell, J., & Pen, U.-L. 2001, Phys. Rev. D, 63, 063001
Sharp, M. K. et al. 2009, ArXiv e-prints
Shaw, L. D., Holder, G. P., & Bode, P. 2008, ApJ, 686, 206
Sievers, J. L. et al. 2009, ArXiv e-prints
Silk, J. 1968, ApJ, 151, 459
Springel, V., Frenk, C. S., & White, S. D. M. 2006, Nature, 440, 1137
Springel, V., White, M., & Hernquist, L. 2001, ApJ, 549, 681
Springel, V. et al. 2005, Nature, 435, 629
Stompor, R. et al. 2002, Phys. Rev. D, 65, 022003
Sunyaev, R. A., & Zeldovich, Y. B. 1972, Comments on Astrophysics and Space Physics, 4,
173
Takahashi, Y. D. et al. 2009, ArXiv e-prints
Tegmark, M. 1997, Phys. Rev. D, 56, 4514
Vikhlinin, A. et al. 2009, ApJ, 692, 1060
White, M., Hernquist, L., & Springel, V. 2002, ApJ, 579, 16
Wright, E. L. et al. 2009, ApJS, 180, 283
Wu, E. Y. S. et al. 2009, Physical Review Letters, 102, 161302
Zemcov, M. B. 2006, PhD thesis, Cardiff University
Zentner, A. R. 2007, International Journal of Modern Physics D, 16, 763
Zhang, P., Pen, U.-L., & Wang, B. 2002, ApJ, 577, 555