A1659 S-band HI channel measurements, ons only
Aug 03, 2003
Intro:
A1659 used standard ons to track positions in the sky
for up to 100 minutes per source. The pattern was an on (300 or 900 seconds)
followed by a 10-second cal on/off. The bandwidth was 750 kHz with 1024 channels.
There were over 300 scans covering more than 60 separate sources. Eventually
a map was to be made of the positions. The data were taken over many days,
with the same source appearing in more than one file.
Analysis:
The on followed by cal on/cal off does not have an off
position for bandpass correction or standing-wave cancellation. Since the
bandwidth is narrow and the lines are narrow, you can probably get away
with this. You want to convert the data to kelvins (this is galactic
work) and then average all the scans for each source. The processing below
does not worry about RFI or bad records. At 3.3 GHz this works. The processing
strategy is:
- For every correlator datafile found:
  - scan the file, finding the on scans
  - input each on scan, average, and then scale to kelvins
  - place all of the averaged scans in an array; when done with the file,
    store this array in an IDL save file for this day.
- After creating the daily save files, input all of the daily save files,
  average together scans of the same source, and then baseline them:
  - input all of the daily save files into a single array
  - find all of the unique sources; for each source:
    - loop over all of the scans of this source
    - average the scans together, weighting by 1/Tsys
    - baseline each averaged spectrum
- Make your map.
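The per-scan "average, then scale to kelvins" step above can be sketched in Python with numpy. This is an illustration only, not the actual corcalib() interface; the function name and the single tcal value (the noise-cal temperature in kelvins) are assumptions:

```python
import numpy as np

def calibrate_scan(on_records, cal_on, cal_off, tcal):
    """Average the records of one on scan, then scale the result to
    kelvins using the cal on/off spectra that follow the scan.
    A sketch only; the real processing works on correlator records."""
    on_avg = on_records.mean(axis=0)                  # average records -> one spectrum
    counts_per_kelvin = np.mean(cal_on - cal_off) / tcal
    return on_avg / counts_per_kelvin                 # spectrum now in kelvins
```

The cal deflection (cal on minus cal off) gives the counts produced by a known temperature step, which fixes the counts-per-kelvin scale for the whole spectrum.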
The routines used for this processing are:
- corcalib()
  averages the records in one on scan and then scales the result to kelvins
  using the cal scans that follow.
- pfcalib()
  processes all of the on scans followed by cals in a file and stores them
  in an IDL save file. The only limitation is that the data must have
  the same dimensions (number of sbc, number of lags) so that they can all
  be placed in an array. Any data that doesn't match the first data found
  is ignored (with a message output). It does this by scanning the file and
  then calling corcalib() for each on scan found.
- corsavbysrc()
  inputs a set of IDL save files and combines all of the data into a
  single array. It has options to break up the data by individual source
  name. In this case, with over 60 separate sources, it is easier to work
  with the data as one large array.
- coraccum()
  accumulates scans with optional weights. The data in the {corget}
  structure are stored as the accumulated data values and accumulated weights.
  coravg() can then be used to scale the data by the weights when the
  accumulating is done.
- corblauto()
  automatically baseline-corrects the data. cormask() can be used
  to create an initial mask that excludes the spectral lines from the
  baselining.
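The corblauto()/cormask() idea can be illustrated with a short sketch: fit a low-order polynomial to the line-free channels selected by a mask, then subtract that fit from the whole spectrum. The function and mask handling here are hypothetical Python, not the actual IDL routines:

```python
import numpy as np

def baseline_subtract(spec, linefree, deg=3):
    """Fit a polynomial of order deg to the channels where linefree
    is True, then subtract the fit from the whole spectrum.
    Sketch of the corblauto()/cormask() idea, not the real code."""
    x = np.arange(spec.size)
    coeffs = np.polyfit(x[linefree], spec[linefree], deg)
    baseline = np.polyval(coeffs, x)
    return spec - baseline, baseline
```

Excluding the line channels from the fit matters: otherwise the polynomial tries to follow the line itself and removes part of the signal.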
An example of the code:
filelist.dat
An array of filenames determines which cor files to process.
procdays.pro
For each data file, pfcalib() is called to process all of the scans in
the file. It averages the records of each scan and then scales to kelvins;
it does not average any scans together. It outputs an IDL save file for
each day.
avgthenbaseline.pro
This file inputs all the daily save files and combines them into one large
array (b_all[]). The data are then averaged over all the scans of the same
source (checking that the frequencies and velocities are the same), resulting
in b_allavg[]. This array is then baselined by source, storing
the baselined data in bbl[] while the actual baselines used are stored
in blAr[]. The code at the end shows various ways to look at the data.
What we end up with:
bbl[] holds the baselined data, one entry per source.
blAr[] holds the baselines. usrcnm[] holds the unique source names (which
are also in the headers of bbl[] and blAr[]).
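The average-by-source step follows the coraccum()/coravg() pattern: accumulate each spectrum times its weight along with the weights themselves, then divide by the summed weights at the end. A minimal Python sketch, assuming plain numpy arrays rather than the {corget} structures the real routines use:

```python
import numpy as np

def average_by_source(specs, srcnames, tsys):
    """Weighted-average the spectra of each unique source,
    weighting each scan by 1/Tsys (the coraccum/coravg pattern).
    specs: (nscans, nchan) array; srcnames, tsys: (nscans,) arrays."""
    out = {}
    for name in np.unique(srcnames):
        sel = srcnames == name
        w = 1.0 / tsys[sel]                            # one weight per scan
        accum = (specs[sel] * w[:, None]).sum(axis=0)  # accumulate data * weight
        out[name] = accum / w.sum()                    # scale by summed weights
    return out
```

Keeping the accumulated weights separate until the end lets scans be added one at a time, which is how the accumulate-then-average split between coraccum() and coravg() works.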
Things that still need to be done:
- Some of this data was taken with the wrong velocity coordinate system.
  These scans need to be corrected.
- Some of the baselines don't look too good. Some of the integrations were
  for 900 seconds. It might be worthwhile breaking these scans into three
  300-second records, averaging, and then baselining them to see whether the
  baselines improve.
processing: usr/a1659/procdays.pro, doit.pro