Studying Diffuse X-ray Emission with ACIS
Pat Broos
Leisa Townsley

diffuse_procedure.txt $Revision: 1304 $ $Date: 2022-02-14 13:36:12 -0700 (Mon, 14 Feb 2022) $

This is a streamlined version of recipe.txt, giving the commands actually used to perform AE's diffuse analysis (Section VIII).

Tested on CIAO 4.12 and CALDB 4.9.4.

THIS PROCEDURE MUST BE EXECUTED IN A csh OR tcsh SHELL!


Use the local path to access your data.
On Hiawatha, 
  cd /Volumes/hiawatha1/targets/<target>/data/extract/

The starting point for AE diffuse analysis is the final output of the point source standard extraction process shown in photometry_procedure.txt, where we have the best backgrounds and source models.


*****
BEWARE OF OVERFLOWING IDL'S COMMAND LINE BUFFER!
If characters are sent to the IDL prompt too quickly, its command line buffer will overflow and drop some of them.  This problem can easily occur if you paste a large block of IDL commands into a terminal window where IDL is running.

In this recipe, blocks of IDL commands are sent to IDL processes in two distinct ways:

1. When the same commands need to be sent to several IDL sessions (e.g. to perform a task for each ObsID), we use features of the "screen" utility to "paste" those commands into several screen windows.  The "screen" session is configured to slow down those "paste" operations, reducing the chance of buffer overflow.

2. When commands need to be sent to just one IDL session, this recipe directs you to "paste" those commands directly into the appropriate screen window, using your computer's normal cut-and-paste operations.  In this case, the best way to prevent overflow of the IDL command buffer is to configure your terminal application's "paste" command to slow down the rate at which characters are delivered.

For example, the Mac terminal application iTerm2 has a menu item called "Paste Slowly" for just this situation.  If you have iTerm2, then the most convenient approach is to navigate to iTerm2->Preferences->Keys->Global Shortcut Keys and bind your normal "paste" key (e.g. CMD-V) to the menu item "Paste Slowly".

Other terminal applications may have similar options to slow down pasting.  If that is not available, you may find that you have to cut-and-paste IDL commands in smaller chunks.
****
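The cure in every case is the same: deliver the text in small chunks, with a pause between chunks so the receiving program can keep up.  A minimal, terminal-agnostic sketch of the idea (in Python; the chunk size and delay here are arbitrary illustrations, not values used by screen or iTerm2):

```python
import time

def slow_paste(text, write, chunk_size=16, delay_s=0.01):
    """Deliver `text` to `write` in small chunks, pausing between chunks
    so a slow reader (e.g. IDL's command line buffer) is not overrun."""
    for i in range(0, len(text), chunk_size):
        write(text[i:i + chunk_size])
        time.sleep(delay_s)

# Example: collect the chunks in a list instead of writing to a terminal.
chunks = []
slow_paste("print, 'hello'\n", chunks.append, chunk_size=4, delay_s=0)
```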
                     

#################################################################################
############# REVIEW THE CALIBRATION STATUS OF YOUR DATA ##################
#################################################################################

Flaws in the calibration of the OBF (the ACIS optical blocking filter) will affect the diffuse images you're about to build, and will affect the calibration of diffuse sources you may later extract.

DO NOT PROCEED UNTIL YOU CONSIDER WHETHER YOUR DATA PRODUCTS ARE ADEQUATELY CALIBRATED!
See calibration_patch_procedure.txt.



#################################################################################
############# PREPARE FOR POINT SOURCE MASKING ACTIVITIES      ##################
#################################################################################

  touch ../diffuse_sources.noindex/diffuse_notes.txt
  tw    ../diffuse_sources.noindex/diffuse_notes.txt
=============================================================================
Record in your notes (diffuse_notes.txt) the version of this procedure that you are executing, shown below:
  diffuse_procedure.txt $Revision: 1304 $ $Date: 2022-02-14 13:36:12 -0700 (Mon, 14 Feb 2022) $
=============================================================================


---------------------------------------------------------------------------
SCREEN SESSION

Since adjustments to point source masking may be required, we need to retain the ObsID-based screen session used in the validation/photometry procedures.

If you no longer have that screen session, then remake it with the commands below.

  cd <target>/data/extract/point_sources.noindex/
  setenv TARGET <target>

  'rm' CURRENT_PASS; ln -s pass_diff1 CURRENT_PASS
  setenv               PASS `readlink CURRENT_PASS`

  setenv OBS_LIST ""
  foreach dir (`'ls' -d ../obs[0-9]* | sort --field-separator=_ --key=2r --key=1.7g`)
    if (! -d $dir) continue
    set obs=`basename $dir | sed -e "s/obs//"`
    setenv OBS_LIST "$OBS_LIST $obs"
    end
  echo $OBS_LIST  
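The sort options above are terse: --field-separator=_ splits each directory name at the underscore, --key=2r orders by the field after the underscore (e.g. "BI") in reverse, and --key=1.7g then orders general-numerically by the ObsID digits starting at character 7 of "../obsNNNN".  A hypothetical Python mimic of that ordering (the directory names below are invented examples):

```python
from functools import cmp_to_key

def sort_obs_dirs(dirs):
    """Mimic `sort --field-separator=_ --key=2r --key=1.7g`:
    order by the field after '_' in reverse, then numerically by the
    ObsID digits starting at character 7 of the first field."""
    def compare(a, b):
        field1_a, _, suffix_a = a.partition('_')
        field1_b, _, suffix_b = b.partition('_')
        if suffix_a != suffix_b:                    # --key=2r (reverse order)
            return -1 if suffix_a > suffix_b else 1
        num_a = float(field1_a[6:])                 # --key=1.7g: digits after "../obs"
        num_b = float(field1_b[6:])
        return (num_a > num_b) - (num_a < num_b)
    return sorted(dirs, key=cmp_to_key(compare))

print(sort_obs_dirs(['../obs9113', '../obs6416_BI', '../obs637']))
# -> ['../obs6416_BI', '../obs637', '../obs9113']
```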

  setenv SCREEN_ARCH ''
  if (`uname -m` == arm64)  setenv SCREEN_ARCH 'arch -arm64 tcsh'

  setenv SCREEN_NAME      AE_point_${TARGET}
  setenv PROMPT              '%m::%${TARGET} (%$PASS) %% '
  screen -S $SCREEN_NAME -d -m -t "${TARGET}"  $SCREEN_ARCH 
  screen -S $SCREEN_NAME -X defslowpaste 10

  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (window1) %% "
  screen -S $SCREEN_NAME -X screen -t window1  $SCREEN_ARCH
  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (window2) %% "
  screen -S $SCREEN_NAME -X screen -t window2  $SCREEN_ARCH
  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (window3) %% "
  screen -S $SCREEN_NAME -X screen -t window3  $SCREEN_ARCH
  sleep 1
  foreach obs ($OBS_LIST)
    screen -S $SCREEN_NAME -X setenv OBS     ${obs}
    screen -S $SCREEN_NAME -X setenv PROMPT "${obs} %% "
    screen -S $SCREEN_NAME -X screen -t   obs${obs}  $SCREEN_ARCH
    sleep 0.5
  end
  screen -S $SCREEN_NAME -r
  sleep 4
  screen -S $SCREEN_NAME -X select 0
  screen -S $SCREEN_NAME -X at \# slowpaste 10
  screen -S $SCREEN_NAME -X at \# stuff 'unset correct^M'
  screen -S $SCREEN_NAME -p 0 -X stuff 'echo $OBS_LIST | wc -w^M'
  screen -S $SCREEN_NAME -X windowlist -b

Double-check that you have a screen window for every ObsID.  There should also be screen windows named 'window1', 'window2', 'window3', plus the usual top-level $TARGET window #0.  Use ^a" to see the list of screen windows.  The last window index should be THREE LARGER than the number of ObsIDs (which was printed by the "echo" command above).  



#################################################################################
############# REVIEW MASKING RESULTS (from photometry_procedure.txt) ##################
#################################################################################
Back in the photometry recipe, we performed a preliminary masking run.  We now review those results.

---------------------------------------------------------------------
Review warning/error messages in log files.

From screen window #0 (named $TARGET), run:

  'rm' CURRENT_PASS; ln -s pass_diff1 CURRENT_PASS
  setenv               PASS `readlink CURRENT_PASS`

  egrep -i "WARNING|ERROR|DEBUG|halted|Stop" ae_extraction_and_masking.*.log | egrep -v "DISPLAY variable|LS_COLORS|no in-band data|No HDUNAME|Program caused arithmetic error|error=|ARF was computed to be zero|has no rows|spans multiple|not observed|spectra will be reused|sources were not extracted|sources not in this observation|ran out of candidate|one emap pixel|(GRATTYPE|CCD_ID)' in supplied header|Background spectrum|BACKGROUND_IMBALANCE_THRESHOLD|Zero length|cannot intersect|finite width|subset of full catalog|saved from previous session|adopted region|StopTime|severely crowded|brute force|single extraction has excessive OVERLAP|reserve region|reached hard limits|changed the standard deviations|max kernel radius|axBary: HDU|-debug" |more

  grep -c 'Spawning ds9 to perform coordinate conversions on EXTRA_MASKFILE' ae_extraction_and_masking.*.log

This warning:  "DMCOPY (CIAO 4.8): [ftColRead]: FITS error 308 bad first element number in dataset..." happens when extra_maskfile.reg does not intersect the ObsID in question.  Pat has an open ticket with Helpdesk on this.  Ignore it for now.  (This is also why we avoided CIAO 4.9: in that version, this situation causes dmcopy to HALT.)

---------------------------------------------------------------------------
Visually review the masked data products.

From screen window #0 (named $TARGET), run:

  'rm' CURRENT_PASS; ln -s pass_diff1 CURRENT_PASS
  setenv               PASS `readlink CURRENT_PASS`
  set temp_dir=`mktemp -d -t masking`
  foreach obs ($OBS_LIST)
    dmcopy "../obs${obs}/background.evt[energy=500:7000][cols sky]" $temp_dir/obs${obs}.evt
    grep polygon ../obs${obs}/extract.reg                          >$temp_dir/obs${obs}.reg
    
    set ds9_cmd="ds9 -title '$TARGET masked data (obs${obs})' -tile -lock frame wcs ../obs${obs}/background.emap -linear  -bin factor 4 $temp_dir/obs${obs}.evt -linear -frame 1 -zoom to fit -frame 2 -zoom to 1 -region load all $temp_dir/obs${obs}.reg -region load all ../diffuse_sources.noindex/extra_maskfile.reg"
   
    echo "$ds9_cmd&" | csh -s 
  end


******
LOOK at the masked event list (background.evt) and emap (background.emap) that are brought up in each ds9 above!    

The "ae_better_masking" tool does "adaptive" masking, so you might see events (or emap area) left inside the 90% aperture masks (green polygons) for faint sources, especially far off-axis.  That's ok.

When a source is very bright, its wings will deserve masking all the way out to the edge of the PSF footprint, and you will see a square mask on the emap.  In such cases, there are presumably pixels beyond the PSF footprint that would have been masked if the PSF footprint were larger.  AE increases the mask for such sources, but you should still look closely at the masked event data to see whether any remaining wings are bright enough to damage your diffuse analysis.  Now that ae_better_masking builds models of readout streaks, we hope that drawing masks around streaks will no longer be necessary.

You may also notice crowded regions that probably have more unresolved point source light than diffuse emission.
******


---------------------------------------------------------------------------
(OPTIONAL) Apply Additional Masking to ALL ObsIDs

IF YOU FIND THAT THE MASKED DATA CONTAINS LIGHT THAT SHOULD NOT BE CALLED "DIFFUSE EMISSION" then revise the hand-drawn regions in ../diffuse_sources.noindex/extra_maskfile.reg, save in CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT, and repeat the ae_better_masking runs using the procedure below.  


From screen window #0 (named $TARGET), run:

  touch SKIP_RESIDUALS
  cat > block.txt << \STOP
  nice idl -queue |& tee -a ae_better_masking.${OBS}.log 
    file_delete, "waiting_for_"+getenv("OBS"), /ALLOW_NONEXISTENT 
\STOP
  screen -S $SCREEN_NAME                -X readreg A block.txt


  cat > block.txt << \STOP
    .run ae 
    .run 
    par = ae_get_target_parameters()
    obsname = getenv("OBS") 
    obsdir  = "../obs" + obsname + "/" 
    semaphore = wait_for_cpu(LOW_PRIORITY=strmatch(obsdir,'*_BI'))

    ae_better_masking, obsname, EVTFILE_BASENAME="validation.evt", /REUSE_MODELS, /SKIP_EXTRACT_BACKGROUNDS, SKIP_RESIDUALS=par.skip_residuals, EXTRA_MASKFILE="../diffuse_sources.noindex/extra_maskfile.reg" 

    file_move, obsdir+"ae_lock", obsdir+"ae_finished", /OVERWRITE 
    exit, /NO_CONFIRM 
    end 
\STOP
  screen -S $SCREEN_NAME                -X readreg B block.txt

  
  foreach obs ($OBS_LIST)
    touch waiting_for_${obs}
    screen -S $SCREEN_NAME -p obs${obs} -X   paste A
    sleep 1
    while ( -e waiting_for_${obs} )
      echo "Launching IDL ..."
      sleep 1
    end
    printf "\nLaunching ObsID ${obs}\n"
    screen -S $SCREEN_NAME -p obs${obs} -X   paste B
    sleep 20
  end
    
A ds9 session is briefly run for each ObsID (to change the coordinate system of extra_maskfile.reg).  When masking is completed, you are prompted to review the results in a 2-frame ds9 session.



As soon as the code above starts launching ObsIDs, start monitoring the progress of the AE runs by executing the following script from screen window #1.  If you see that no runs are making progress, then look at each screen window to see what has happened.  

  touch .marker
  while 1
    sleep 60
    printf '\nThe following AE runs are still making progress:\n'
    printf '  %s\n' `find . -maxdepth 1 -newer .marker -name "*.log"`
    touch .marker
    if (! `ls ../obs[0-9]*/ae_lock | wc -l`) break
  end

  egrep -i "WARNING|ERROR|DEBUG|halted|Stop" ae_better_masking.*.log | egrep -v "DISPLAY variable|LS_COLORS|arithmetic error|no in-band|error=|changed the standard deviations|not in this observation" |more

  grep -c 'Spawning ds9 to perform coordinate conversions on EXTRA_MASKFILE' ae_better_masking.*.log

  
If you see that no runs are making progress, then look for failure messages with the following:
  egrep -i "stop|halted" ae_better_masking.*.log
and/or
  tail -1 ae_better_masking.*.log | egrep -v 'finished|minutes|Waiting|Source:|arithmetic'



---------------------------------------------------------------------------
(OPTIONAL) Apply Additional Masking to SPECIFIC ObsIDs

Rarely, you will find that the required "extra masking" must be ObsID-specific.  For example, the extent of an unwanted astrophysical source may vary among ObsIDs.  Or, stray light artifacts tend to occur on individual ObsIDs.

In this case you must draw whatever ObsID-specific "extra mask region" files are needed, in CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT.  Save each as, e.g., diffuse_sources.noindex/extra_maskfile_obs6416_BI.reg (where, in this example, "6416_BI" is $OBS).  Then, run ae_better_masking BY HAND (rather than via the script above) in the appropriate $OBS's screen window, supplying the matching EXTRA_MASKFILE:

  nice idl -queue |& tee -a ae_better_masking.${OBS}.log 
    .run ae 
    .run 
    par = ae_get_target_parameters()
    obsname = getenv("OBS") 
    obsdir  = "../obs" + obsname + "/" 

    ae_better_masking, obsname, EVTFILE_BASENAME="validation.evt", /REUSE_MODELS, /SKIP_EXTRACT_BACKGROUNDS, SKIP_RESIDUALS=par.skip_residuals, $
    EXTRA_MASKFILE="../diffuse_sources.noindex/extra_maskfile_obs"+obsname+".reg"

    file_move, obsdir+"ae_lock", obsdir+"ae_finished", /OVERWRITE 
    exit, /NO_CONFIRM 
    end 



#################################################################################
WAIT HERE UNTIL THE EXPOSURE MAP JOBS HAVE FINISHED.
#################################################################################

Back in photometry_procedure.txt you started some exposure map runs in a 'diffuse_emaps' screen session.  Find window #0 of that screen session and monitor the runs until they finish:

  touch .marker
  while 1
    if (! `ls pointing*/obsid*/ae_lock | wc -l`) break
    sleep 120
    printf '\nThe following exposure map runs are still making progress:\n'
    printf '  %s\n' `find pointing*/obsid*/ae_make_emap.log -newer .marker`
    touch .marker
  end
 

  egrep -i "WARNING|ERROR|INFORMATION|DEBUG|halted|Stop|BADPIX_FILE" `ls -1tr pointing*/obsid*/ae_make_emap.log` | egrep -v "stowed_error|Merged|Program caused arithmetic error|Using previously computed|pbkfile parameter is no longer used"


After checking for error messages, CLOSE THE EMAP-BUILDING SCREEN SESSION.



#################################################################################
############# SETUP SYMLINKS TO DATA PRODUCTS FOR DIFFUSE ANALYSIS ##################
#################################################################################
We exclude gratings observations from diffuse analysis because most of the diffuse events are dispersed all over the field.


{We generated stowed data products in reduction_procedure.txt, which is not publicly released.  External users of AE should read Appendix A for suggestions about generating "stowed" event data products.}


---------------------------------------------------------------------------
From screen window #0, 

  set extract_dir=`echo $cwd | sed -E "s|(.*extract)(.*)|\1|"`
  cd $extract_dir ; pwd
  
  foreach obs ($OBS_LIST)
    echo ${obs}
    pushd obs${obs}
      'rm' -f diffuse.emap
      'rm' -f diffuse.evt
      if (`dmkeypar validation.evt GRATING echo=yes` == 'NONE') then
        ln -sv background.emap diffuse.emap
        ln -sv background.evt  diffuse.evt
      endif
    popd
  end

 idl -queue 
    .run ae
    .run
    run_command, file_which('setup_AE_for_obsid_data.csh')+' "../pointing_*/obsid_*" AE'
    exit, /NO_CONFIRM
    end


You should see messages showing links being made to emaps in various energy bands, e.g.
  obs.vhard.emap    -> ../../pointing_4/obsid_9113/AE/obs.FI.vhard.emap
  obs.hard.emap     -> ../../pointing_4/obsid_9113/AE/obs.FI.hard.emap
  obs.soft_med.emap -> ../../pointing_4/obsid_9113/AE/obs.FI.soft_med.emap
  obs.soft_low.emap -> ../../pointing_4/obsid_9113/AE/obs.FI.soft_low.emap
  obs.1300eV.emap   -> ../../pointing_4/obsid_9113/AE/obs.FI.1300eV.emap
  obs.1700eV.emap   -> ../../pointing_4/obsid_9113/AE/obs.FI.1700eV.emap
Messages about "removing broken symlink" are generally benign.
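Conceptually, each obs.<band>.emap link simply points at the matching detector-specific emap under the pointing directory.  A minimal sketch of that link pattern (this is NOT the actual setup_AE_for_obsid_data.csh logic; the band names are taken from the example messages above, and the function and paths are hypothetical):

```python
import os
import tempfile

def link_band_emaps(obs_dir, ae_dir, detector='FI',
                    bands=('vhard', 'hard', 'soft_med', 'soft_low', '1300eV', '1700eV')):
    """Create obs.<band>.emap symlinks in obs_dir pointing at the
    detector-specific emaps in ae_dir, replacing any stale links."""
    for band in bands:
        link   = os.path.join(obs_dir, 'obs.%s.emap' % band)
        target = os.path.join(ae_dir,  'obs.%s.%s.emap' % (detector, band))
        if os.path.islink(link):
            os.remove(link)          # a broken symlink here is benign; replace it
        os.symlink(target, link)

# Usage on a scratch directory (a symlink may be created before its target exists):
scratch = tempfile.mkdtemp()
obs_dir = os.path.join(scratch, 'obs_demo'); os.makedirs(obs_dir)
ae_dir  = os.path.join(scratch, 'AE');       os.makedirs(ae_dir)
link_band_emaps(obs_dir, ae_dir)
made = sorted(os.listdir(obs_dir))
```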
    
    
    
****If you're not planning to do diffuse spectral analysis, you can kill your "AE_point_<target>" screen session now!!***
 


#################################################################################
############# SMOOTH THE DIFFUSE DATA (taken from tara_smooth.txt) ##################
#################################################################################

MAKE A DIRECTORY AND A SCREEN SESSION FOR THE ADAPTIVE SMOOTHING WORK
====================================================

In any iTerm that is not running a screen session:

  cd <target>/data/extract/
  setenv TARGET <target>

  mkdir adaptive_smoothing 
  cd    adaptive_smoothing
  ~/bin/acis_lkt4 .
  ln -s  ../../tangentplane_reference.evt .
  

  setenv SCREEN_ARCH ''
  if (`uname -m` == arm64)  setenv SCREEN_ARCH 'arch -arm64 tcsh'

  setenv SCREEN_NAME adaptive_smoothing_${TARGET}
  setenv PROMPT                   '%m::%${TARGET} (adaptive_smoothing) %% '
  screen -S $SCREEN_NAME -d -m -t       ${TARGET}  $SCREEN_ARCH
  screen -S $SCREEN_NAME -X defslowpaste 10

  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (build_scene) %% "
  screen -S $SCREEN_NAME -X screen -t build_scene  $SCREEN_ARCH
  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (build_scene) %% "
  screen -S $SCREEN_NAME -X screen -t build_scene  $SCREEN_ARCH
  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (build_scene) %% "
  screen -S $SCREEN_NAME -X screen -t build_scene  $SCREEN_ARCH
  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (tara_smooth) %% "
  screen -S $SCREEN_NAME -X screen -t tara_smooth  $SCREEN_ARCH

  screen -S $SCREEN_NAME -X chdir   $PWD
  screen -S $SCREEN_NAME -r
  sleep 4
  screen -S $SCREEN_NAME -X select 0
  screen -S $SCREEN_NAME -X at \# slowpaste 10
  screen -S $SCREEN_NAME -X at \# stuff 'unset correct^M'
  screen -S $SCREEN_NAME -X at \# stuff 'pwd^M'
  screen -S $SCREEN_NAME -X windowlist -b



DEFINE A "SCENE" ON WHICH IMAGES WILL BE CONSTRUCTED, AND THEN SMOOTHED
===========================================================================================
The call below builds a template image (defining the scene), a mask image recording the field-of-view, and an unmasked full-band composite emap (which may be nice to have around). 

Edit the variable "DESIRED_IMAGE_PIXELS" at the start of the code below to set the number of pixels used to cover the field-of-view, if you don't want the default.

From any "adaptive_smoothing" screen window, run:

  idl -queue |& tee -a build_scene.log
    file_delete, /ALLOW_NONEXISTENT, 'build_composite.sav'
    .run
    desired_image_pixels = 1000L^2
    obs_pattern          = '../obs[0-9]*'
    scene_name           = 'fullfield'

    template_row =  {image_spec, name:'', emap_basename:'', energy_filter:''}
    image_spec   = [$
                    ; Full band, used only to define kernels for narrow bands.
                    {image_spec,  'full_band'    , 'full'    ,  '500:7000' },$
                    ; Narrow bands that use kernels defined by wide bands.
                    {image_spec,  '500:900_band' , 'soft_med',   '500:900' },$
                    {image_spec,  '900:1100_band', 'soft'    ,   '900:1100' },$
                    {image_spec, '1100:1500_band', '1300eV'  ,  '1100:1500' },$
                    {image_spec, '1500:2000_band', '1700eV'  ,  '1500:2000' },$
                    {image_spec, '2000:4000_band', 'hard'    ,  '2000:4000' },$
                    {image_spec,     'vhard_band', 'vhard'   ,  '4000:7000' },$
                    ; Other wide bands used only to define kernels for narrow bands.
                    {image_spec,  'soft_band'    , 'soft'    ,  '500:2000' },$
                    {image_spec,  'hard_band'    , 'hard'    , '2000:7000' },$
                    ; Band used for scaling Stowed Background.
                    {image_spec,  'scale_band'   , 'full'    , '9000:12000'} $
                    ]
                    ; T-ReX diffuse bands (independent kernels)
;                   {image_spec,  '500:700_band' , 'soft_low',  '500:700'  },$
;                   {image_spec,  '700:1100_band', 'soft_med',  '700:1100' },$
;                   {image_spec, '1100:2300_band', '1300eV'  , '1100:2300' } $
;                   ]
;                   ; Retired bands.
;                   {image_spec, '500:1000_band' , 'soft_med',  '500:1000' },$

    narrow_band_names = (image_spec.name)[1:6]

    image_spec.emap_basename =   'obs.'+image_spec.emap_basename+'.emap'
    image_spec.energy_filter ='energy='+image_spec.energy_filter

    merged_eventfile  = 'full_band/'+scene_name+'.diffuse.evt'

    obs_event_fn     = file_search(obs_pattern+'/validation.evt', COUNT=num_obs)
    if (num_obs EQ 0) then begin
      print, 'ERROR: no validation.evt files found.'
      retall
    endif

    obsdir           = file_dirname(obs_event_fn)
    diffuse_event_fn = obsdir+'/diffuse.evt'
     stowed_event_fn = obsdir+'/diffuse.bkg.evt'

    diffuse_info = file_info(diffuse_event_fn)
     stowed_info = file_info( stowed_event_fn )
    good_ind = where(diffuse_info.EXISTS AND stowed_info.EXISTS, good_count, COMPLEMENT=bad_ind, NCOMPLEMENT=bad_count)

    if (bad_count GT 0) then begin
      print, 'ERROR: The following ObsIDs are missing the files diffuse.evt and/or diffuse.bkg.evt:'
      forprint, obsdir, SUBSET=bad_ind
      print, 'Exit IDL or type .c to ignore those ObsIDs and continue.'
      stop
          obs_event_fn =     obs_event_fn[good_ind]
      diffuse_event_fn = diffuse_event_fn[good_ind]
       stowed_event_fn =  stowed_event_fn[good_ind]
    endif

    diffuse_info = file_info(diffuse_event_fn)
     stowed_info = file_info( stowed_event_fn )
    bad_ind = where(/NULL, diffuse_info.MTIME GE stowed_info.MTIME)

    if isa(bad_ind) then begin
      print, F='(%"\n\nERROR!  The following stowed event lists are OLDER than the corresponding ObsID data.\n")'
      forprint, SUBSET=bad_ind, stowed_event_fn
      print, 'Exit IDL or type .c to ignore this and continue.'
      stop
    endif


    ; Build a scene mask (EMAP_BASENAME supplied) from unmasked exposure maps.
    build_scene, OBS_EVENT_FN=obs_event_fn, EMAP_BASENAME='obs.emap', scene_name, /CREATE_TEMPLATE, /CREATE_IMAGE, IMAGE_FILTER_SPEC='energy=500:7000', IMAGE_NAME='full_band', DESIRED_IMAGE_PIXELS=desired_image_pixels

    file_copy, 'fullfield.mask', 'nominal_fullfield.mask', /OVERWRITE

    extra_maskfile = '../diffuse_sources.noindex/extra_maskfile.reg'

    if ~file_test(extra_maskfile) then $
      extra_maskfile = DIALOG_PICKFILE(TITLE='If desired, select a region file (sexagesimal format) that declares unobserved regions of the field, where no diffuse flux should be estimated.', FILTER='*.reg')

    if logical_true(extra_maskfile) && logical_true( file_lines(extra_maskfile) ) then begin
      run_command, string(extra_maskfile, $
        F="(%'dmcopy ""nominal_fullfield.mask[exclude sky=region(%s)]"" fullfield.mask clob+')")

      ; Verify that fullfield.mask has the correct dimensions, and display it.
      template = readfits('fullfield_template.img')
      mask     = readfits('fullfield.mask')
      if ~ARRAY_EQUAL(size(template, /DIMEN), size(mask, /DIMEN)) then begin
        print, extra_maskfile, F='(%"ERROR: emap and field mask (fullfield_template.img, fullfield.mask) must have the same dimensions.  Your file %s MUST BE IN CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT.")'
        help, template, mask
        retall
      endif
    endif

    save, scene_name, image_spec, narrow_band_names, merged_eventfile, obs_event_fn, diffuse_event_fn, stowed_event_fn, FILE='build_composite.sav'
    exit, /NO_CONFIRM
    end


The dmcopy above declares that regions you previously explicitly masked via ../diffuse_sources.noindex/extra_maskfile.reg are 'unobserved'; flux will not be computed there.  You could instead define a different region file for that purpose, e.g. to avoid artifacts that can occur when flux is extrapolated into a masked area under certain conditions.

  dmcopy "nominal_fullfield.mask[exclude sky=region(no_smoothing.reg)]" fullfield.mask clob+

In the example above, no_smoothing.reg MUST BE IN CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT!
 


(optional) Reduce or enlarge the resolution of the scene template
=================================================================================
If your observation is shallow, the default angular resolution of the scene built above (1000^2 pixels) may require very large smoothing kernels, which lead to long run-times in the tara_smooth tool.

Conversely, if your observation is very deep, the default angular resolution may produce very small smoothing kernels, which introduce quantization artifacts.

You may change the number of pixels in the scene by changing the scalar "DESIRED_IMAGE_PIXELS" defined at the top of the code block above; e.g. for shallow observations DESIRED_IMAGE_PIXELS=500L^2 will run faster, and for a deep or wide-field observation DESIRED_IMAGE_PIXELS=3000L^2 will give you more angular resolution.  One metric (after smoothing has been run) is to look at the smallest values in the radius image---values of zero or a few indicate that you need more resolution.  If all values are 50, then you're wasting computer time building a very smooth image.

Rectangular scenes are allowed.
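The trade-off above is just a pixel-count budget.  A hypothetical sketch of how one might pick an integer rebinning factor so a field of raw sky pixels collapses to roughly DESIRED_IMAGE_PIXELS scene pixels (illustrative arithmetic only, not the algorithm build_scene actually uses):

```python
import math

def scene_rebin(field_nx, field_ny, desired_image_pixels=1000**2):
    """Choose an integer rebin factor so field_nx*field_ny sky pixels
    collapse to roughly desired_image_pixels scene pixels."""
    factor = max(1, round(math.sqrt(field_nx * field_ny / desired_image_pixels)))
    return factor, (field_nx // factor, field_ny // factor)

# An 8000x8000-sky-pixel field at the default budget of 1000^2 scene pixels:
print(scene_rebin(8000, 8000))            # -> (8, (1000, 1000))
```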



(optional) Change the field of view of your scene template
=================================================================================
The build_scene call above defines a scene field-of-view that encompasses all the ACIS data.


If you want to include only certain ObsIDs, then change "obs_pattern" in the code block above to a string array of paths to each ObsID.


If you want to define an arbitrary rectangular field-of-view, then display the target-level event data and write down the PHYSICAL (aka "SKY") coordinates of the corners of your desired field-of-view. 

  ds9  ../target.evt &
  
Then, use dmcopy to build a "template image" with that field-of-view, e.g.

  dmcopy "../target.emap[x=1000:2000:1,y=500:3000:1]" custom_template.img clob+

Then, run the IDL code above with the following modifications:

    ; Name the scene to match the prefix in the name of your custom template, "custom_template.img".
    scene_name = 'custom'
    
    ; USE the scene template you built by hand, rather than CREATE a default scene template.
    build_scene, OBS_EVENT_FN=obs_event_fn, EMAP_BASENAME='obs.emap', scene_name, CREATE_TEMPLATE=0, /CREATE_IMAGE, IMAGE_FILTER_SPEC='energy=500:7000', IMAGE_NAME='full_band'





REPROJECT ALL THE DATA PRODUCTS ONTO THE SCENE AND COMBINE TO CREATE IMAGES, ETC.
=================================================================================
Create target images, stowed data images, and emaps in several energy bands.

The IDL sessions below may be launched while the one above is still running.

From any "adaptive_smoothing" screen window, run:

  touch                build_composite_obs_lock
  idl -queue |& tee -a build_composite_obs.log
    .run ae
    .run
    wait_until_files_exist, 10, 'build_composite.sav'
    restore,                    'build_composite.sav'

    ; Remove one of the files to be built below, to prevent diffuse_smoothing_block1 from using stale files.
    file_delete, /ALLOW_NONEXISTENT, image_spec.name+'/'+scene_name+'.diffuse.emap'

    build_scene, OBS_EVENT_FN=diffuse_event_fn, EMAP_BASENAME=image_spec.emap_basename, MASK_BASENAME='background.emap', RESOLUTION=4, scene_name, SUFFIX='.diffuse', /CREATE_IMAGE, SHOW=0, IMAGE_FILTER_SPEC=image_spec.energy_filter, IMAGE_NAME=image_spec.name, MERGED_FILTERSPEC='energy=500:7000', MERGED_COLUMNLIST='det,sky,energy', MERGED_EVENTFILE=merged_eventfile

    cmd  = string(merged_eventfile, F="(%'ds9 -title ""${TARGET} masked events 500:700 eV, DET coordinates"" -bin factor 8 -linear ""%s[bin=detx,dety][energy>500 && energy<700]"" -zoom to 1  >& /dev/null &')")
    run_command, cmd
    file_delete, 'build_composite_obs_lock', /ALLOW_NONEXISTENT
    exit, /NO_CONFIRM
    end


From any "adaptive_smoothing" screen window, run:

  touch                build_composite_stowed_lock
  idl -queue |& tee -a build_composite_stowed.log
    .run ae
    .run
    wait_until_files_exist, 10, 'build_composite.sav'
    restore,                    'build_composite.sav'

    ; Remove one of the files (in each band) built below, to prevent diffuse_smoothing_block1 from using stale files.
    file_delete, /ALLOW_NONEXISTENT, image_spec.name+'/'+scene_name+'.bkg.img'

    build_scene, OBS_EVENT_FN=stowed_event_fn, EMAP_BASENAME='diffuse.bkg.scaling', RESOLUTION=4, scene_name, SUFFIX='.bkg',  /CREATE_IMAGE, SHOW=0, IMAGE_FILTER_SPEC=image_spec.energy_filter, IMAGE_NAME=image_spec.name, /SUM_RATES_NOT_COUNTS
    
    file_delete, 'build_composite_stowed_lock', /ALLOW_NONEXISTENT
    exit, /NO_CONFIRM
    end

The log files from these IDL sessions show ds9 calls to display the observed and stowed images.





CHECK FOR HOT COLUMNS
=====================
The ds9 session titled "DET coordinates" shows the diffuse data in DETECTOR coordinates in a soft energy band.  Carefully examine this to judge whether any "bad" detector pixels/columns remain that will affect your analysis.  This is unlikely if you used our L1->L2 analysis recipe.  All ObsIDs are combined, which improves SNR *if* the bad columns are stable over time and all ObsIDs used the same aimpoint and CCD configuration.  Otherwise it's a bit of a mess.

*** WE HAVE NOT FIGURED OUT WHAT TO DO WHEN YOU SEE RESIDUAL HOT COLUMNS !!! ***



ADAPTIVELY SMOOTH THE DATA
==========================
Smoothing results will be stored in a tree of subdirectories whose names record the energy band, smoothing significance, and smoothing kernel type, e.g.:
  adaptive_smoothing/full_band/sig015/tophat/
     
Four files are produced by the smoothing routine:
  1. A flux map          (.flux).
  2. A sqrt(flux) map    (.sqrt.flux), to compress the data range.
  3. A flux error map    (.signif).
  4. A kernel radius map (.radius).

The degree to which 'holes' in the data are smoothed over is controlled by code in tara_smooth.pro.  

The IDL session below may be launched while the three above are still running.
From any screen window, smooth images using a signal-to-noise requirement of 15:

  nice idl -queue |& tee -a tara_smooth.log
    .run ae
    .run
    wait_until_files_exist, 10, 'build_composite.sav'
    wait,30  ; While build_scene IDL sessions are deleting stale files.
    diffuse_smoothing_block1
    exit, /NO_CONFIRM
    end
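The essence of the adaptive "tophat" smoothing performed here: at each scene pixel, grow a circular kernel until the enclosed counts meet the signal-to-noise requirement (sqrt(N) >= 15 in this run, with the kernel radius capped), then divide by the enclosed exposure to obtain a flux.  A slow but minimal sketch of that idea (NOT the tara_smooth implementation; the Poisson SNR is approximated as sqrt(counts), and the radius cap of 50 is an assumption suggested by the note about radius values of 50 below):

```python
import math

def adaptive_tophat(counts, exposure, snr_min=15.0, r_max=50):
    """At each pixel, grow a circular top-hat kernel until the enclosed
    counts N satisfy sqrt(N) >= snr_min (or the radius cap is hit),
    then flux = N / (enclosed exposure)."""
    ny, nx = len(counts), len(counts[0])
    flux   = [[0.0] * nx for _ in range(ny)]
    radius = [[0]   * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            for r in range(r_max + 1):
                n = e = 0.0
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        if dx*dx + dy*dy <= r*r and 0 <= y+dy < ny and 0 <= x+dx < nx:
                            n += counts[y+dy][x+dx]
                            e += exposure[y+dy][x+dx]
                if math.sqrt(n) >= snr_min or r == r_max:
                    flux[y][x]   = n / e if e > 0 else 0.0
                    radius[y][x] = r
                    break
    return flux, radius
```

This mirrors two of the four outputs listed above: the flux map and the kernel radius map.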

CHECK FOR ERROR/WARNING MESSAGES:

  egrep -i "WARNING|ERROR|DEBUG|halted|Stop"  build*.log | egrep -v -e "-    |DISPLAY variable|LS_COLORS|Program caused arithmetic error|has different value...Merged|print,"

  egrep -i "INFORMATION|WARNING|ERROR|DEBUG|halted|Stop|gross"  tara_smooth.log | egrep -v "DISPLAY variable|LS_COLORS|Program caused arithmetic error"
  
This code now throws a warning about negative values in these images.  We don't know what to make of this yet -- for now, record this information in your notes.
  
    
The tara_smooth routine (invoked by diffuse_smoothing_block1) accepts an optional BAND_NAME parameter that lets you specify a subset of the energy bands to smooth or tessellate.  If BAND_NAME is omitted, then all images of the specified scene are smoothed.

BAND_NAME can be one band name, a list of band names (as above), or a shell wildcard expression such as  BAND_NAME='soft*'.  Bands are processed in the order in which they appear in BAND_NAME.

If you wish to smooth all images using the same set of kernels that were used in a previous run, then supply the pathname to that run's radius map (e.g. 'soft_med/sig015/tophat/fullfield.diffuse_filled.radius') via the optional parameter FIXED_RADIUS_MAP_FN.  You can preserve the nominal smoothing runs (rather than overwriting them) by supplying RUN_NAME.  For example:
    tara_smooth, scene_name, /TOPHAT, BAND_NAME='soft_med', RUN_NAME='kernels_from_soft_sig30', FIXED_RADIUS_MAP_FN='soft_band/sig030/tophat/fullfield.diffuse_filled.radius'
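
For reference, the BAND_NAME matching described above can be sketched in Python (an illustration of the documented semantics only, not AE code; the function name select_bands is made up):

```python
from fnmatch import fnmatch

def select_bands(band_name, available):
    """Return the bands to process, in the order they appear in band_name.

    band_name may be one band name, a list of names, or a shell wildcard
    such as 'soft*'; None (BAND_NAME omitted) selects every available band.
    """
    if band_name is None:
        return list(available)
    patterns = [band_name] if isinstance(band_name, str) else list(band_name)
    selected = []
    for pat in patterns:                      # preserve BAND_NAME ordering
        for band in available:
            if fnmatch(band, pat) and band not in selected:
                selected.append(band)
    return selected

bands = ['full_band', 'hard_band', 'soft_band', 'soft_med']
print(select_bands('soft*', bands))                     # ['soft_band', 'soft_med']
print(select_bands(['hard_band', 'full_band'], bands))  # explicit order kept
```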




SUMMARY OF DIRECTORY STRUCTURE MADE BY SMOOTHING ABOVE

The wide band flux images made by combining narrow band flux images are:
  full_band/sum_of_6_bands/fullfield.diffuse_filled.flux
  hard_band/sum_of_2_bands/fullfield.diffuse_filled.flux
  soft_band/sum_of_4_bands/fullfield.diffuse_filled.flux

The three wide band smoothing sessions (used to define kernels for narrow band smoothing) are in their usual places:
  full_band/sig015/tophat/
  hard_band/sig015/tophat/
  soft_band/sig015/tophat/

The narrow-band smoothing sessions using wide band kernels have paths of the form:
  *_band/kernels_from_full_sig15/tophat
  *_band/kernels_from_hard_sig15/tophat
  *_band/kernels_from_soft_sig15/tophat



If you need to build a square-root version of any flux image, use the following code:
  idl
  .run
  fn = ''
  read, 'Enter path to flux image: ', fn
  if ~file_test(fn) then message,/NONAME, 'ERROR: File not found: '+fn
  fdecomp, fn, disk, item_path, item_name, item_qual
  if (item_qual NE 'flux') then message,/NONAME, 'ERROR: Filename should end in .flux.'
  img = readfits(fn, header)

  psb_xaddpar, header, 'BSCALE'  , 1.0
  psb_xaddpar, header, 'BUNIT'   , 'sqrt(photon /cm**2 /s /arcsec**2)', 'photon surface brightness'
  psb_xaddpar, header, 'HDUNAME' , 'flux_map'

  out_fn = item_path+item_name+'.sqrt.'+item_qual
  if file_test(out_fn) then message,/NONAME, 'ERROR: Remove existing sqrt image: '+out_fn
  writefits, out_fn, sqrt(img), header      
  print, 'Wrote ', out_fn
  end
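
Minus the FITS I/O, the transformation performed by that IDL code can be sanity-checked in Python (a sketch only; the header is modeled as a plain dict, and the IDL code above remains the supported path):

```python
import math

def sqrt_flux(pixels, header):
    """Apply the .flux -> .sqrt.flux transform: take the square root of
    each pixel and update the brightness unit, as the IDL code above
    does with psb_xaddpar."""
    out_header = dict(header)
    out_header['BSCALE'] = 1.0
    out_header['BUNIT'] = 'sqrt(photon /cm**2 /s /arcsec**2)'
    # NOTE: math.sqrt raises on the negative fluxes warned about earlier
    # in this recipe; such pixels would need masking first.
    return [math.sqrt(p) for p in pixels], out_header

pixels, hdr = sqrt_flux([0.0, 4.0, 9.0],
                        {'BUNIT': 'photon /cm**2 /s /arcsec**2'})
print(pixels)   # [0.0, 2.0, 3.0]
```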



#################################################################################
############# REVIEW THE SMOOTHING RESULTS ##################
#################################################################################

DRAW A REGION FILE THAT DEPICTS THE FIELD OF VIEW FOR THE DIFFUSE DATA.
=====================================================================
This region file may be used in figures that show diffuse data.
This region file may be used to define a "global" diffuse region that will be extracted.

The binary scene "mask" image created above is a convenient background for drawing these regions.
The "contour" facility in ds9 may help you create approximate field of view regions that can be cleaned up by hand.

Working in the adaptive_smoothing directory ...

  set file='fov_diffuse.reg'
  if (! -e $file) then
    printf '\nDRAW POLYGONS USING THE COLOR AND WIDTH OF THE TEMPLATE PROVIDED.\n'
    echo 'IMAGE;polygon(100,100, 500,100, 500,500, 100,500) # color=blue width=2' > $file
  endif
  ~/bin/acis_lkt4 $file
  
  ds9 -title "${TARGET}: Diffuse Field Mask" -linear fullfield.mask         -region $file -zoom to fit &
or
  ds9 -title "${TARGET}: Diffuse Field Mask" -linear nominal_fullfield.mask -region $file -zoom to fit &
if that's more appropriate.


For compatibility with other work, make these polygons THE COLOR AND WIDTH OF THE TEMPLATE PROVIDED (blue, width=2).

Save the polygon as fov_diffuse.reg, in CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT.


{That sexagesimal format is needed when these regions are used to define a "global" diffuse region that will be extracted.}



DRAW A REGION FILE THAT DEPICTS THE FIELD OF VIEW FOR THE POINT SOURCE DATA.
=====================================================================
This region file may be used in figures that show point sources, and may be used to build a bounding polygon for cropping counterpart catalogs.

Still in the adaptive_smoothing directory ...

  set file='../point_sources.noindex/fov_ptsrcs.reg'
  if (! -e $file) rsync -a --info=NAME fov_diffuse.reg $file
  ~/bin/acis_lkt4 $file
  ds9 -title "${TARGET}: Point Source Field Mask" -linear ../../fullfield.target.mask -region $file -zoom to fit &

Edit fov_ptsrcs.reg to include the full outline of the data (it differs from fov_diffuse.reg only if grating observations included in your ptsrc analysis were removed for diffuse analysis).  Re-save the polygon as point_sources.noindex/fov_ptsrcs.reg, in CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT.



REVIEW THE QUALITY OF THE STOWED BACKGROUND SCALING
=====================================================================
The normalization of the stowed backgrounds was carefully adjusted in the L1->L2 processing for each ObsID (Hickox & Markevitch 2006, ApJ, 645, 95, Section 4.1.3).  However, there are plenty of things that can go wrong, so you should verify that the background-corrected smoothed image in the energy band "scale_band" (9:12 keV) is consistent with zero.
                                                                              
The histogram of "photometry significance" made by the code below shows the distribution of pixels in the 9:12 keV "significance" image (= SNR = flux/flux_error).  This statistic is a direct measure of the consistency between photometry in the 9:12 keV band and zero.  WE GUESS THAT if the stowed background were a perfect model of the 9:12 keV events in our observation(s), then this distribution should be Normal, with zero mean and a standard deviation of 1.0.

  idl
    .run ae
    run_command, 'ds9 -title "scale_band" -linear scale_band/sig015/tophat/fullfield.diffuse_filled.flux &'
    dataset_1d, id, readfits('scale_band/sig015/tophat/fullfield.diffuse_*.signif'), DENSITY_TITLE='scale_band (9:12 keV)', XTITLE='photometry significance (9:12 keV)', PS_CONFIG={filename:'scale_signif.ps'}, /PRINT
  
  exit

Ideally, you would not see any structure in the smoothed scale_band image displayed in the ds9 session spawned above.  However, with an aggressive enough display stretch, smoothed images always show "structure"; identifying *significant* structure is not easy for the human eye.  The width of the smoothed image pixel distribution is NOT informative, since it can be changed by selecting a different smoothing significance target.

If the distribution in scale_signif.ps is not centered at zero, stop and talk to Pat!!!
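
The consistency test described above amounts to checking that the pixels of the 9:12 keV significance image look like draws from a Normal distribution with zero mean and unit width.  A minimal Python sketch of that check (illustrative only; simulated pixels stand in for the .signif image):

```python
import math, random

def signif_stats(values):
    """Sample mean and standard deviation of significance-image pixels.
    For a perfect stowed-background model these should be near 0 and 1."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var)

random.seed(42)
pixels = [random.gauss(0.0, 1.0) for _ in range(20000)]  # stand-in for .signif
mean, sd = signif_stats(pixels)
print(round(mean, 2), round(sd, 2))  # should be close to 0.0 and 1.0
# A mean well away from zero would indicate a stowed-background scaling problem.
```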




(optional) Build soft- and hard-band event lists.
=================================================================================
  dmcopy "full_band/fullfield.diffuse.evt[energy<2000]" soft_band/fullfield.diffuse.evt clob+
  dmcopy "full_band/fullfield.diffuse.evt[energy>2000]" hard_band/fullfield.diffuse.evt clob+
  
  ds9c -red soft_band/fullfield.diffuse.evt -bin factor 8 -green hard_band/fullfield.diffuse.evt -bin factor 8 &

The ds9c image above can show regions of unusually hard or soft emission.



#################################################################################
############# RELEASE SMOOTHED IMAGES TO COLLABORATORS (optional) ###############
#################################################################################
As psb6 on cochise, here is an example

  cd '/Volumes/cochise_TimeMachine_14TB/Dropbox (Astronomy)/'
  mkdir W43
  cd    W43
  
  set origin='/bulk/hiawatha_fast/targets/W43/data'
  
  rsync -avL  ${origin}/extract/{target.emap,target.evt} .

  gzip -v target.*
  
  rsync -a --info=NAME --relative ${origin}/extract/point_sources.noindex/./tables/observing_log.pdf    point_sources
  rsync -a --info=NAME --relative ${origin}/extract/point_sources.noindex/./tables/xray_properties.fits point_sources
  rsync -avL           ${origin}/extract/*.reg   point_sources
  
  gzip -v point_sources/tables/xray_properties.fits
  
  rsync -a --info=NAME --relative ${origin}/extract/./adaptive_smoothing/full_band/sig015/tophat/fullfield.diffuse_filled.{flux,reg} .
  rsync -a --info=NAME --relative ${origin}/extract/./adaptive_smoothing/hard_band/sig015/tophat/fullfield.diffuse_filled.{flux,reg} .
  rsync -a --info=NAME --relative ${origin}/extract/./adaptive_smoothing/soft_band/sig015/tophat/fullfield.diffuse_filled.{flux,reg} .

  foreach file ( `find . -name fullfield.diffuse_filled.flux` )
    mv -f $file temp
    dmcopy "temp[1][opt type=r4]" $file  verbose=2
    gzip -v $file
    end
  'rm' temp
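
For reference, the [opt type=r4] filter above stores each pixel as a 4-byte real instead of an 8-byte double, halving the file even before gzip.  Python's struct module shows the effect (illustrative; dmcopy performs the real conversion):

```python
import struct

npix = 1000
pixels = [i * 1e-9 for i in range(npix)]     # fake flux values

as_r8 = struct.pack('>%dd' % npix, *pixels)  # 8-byte reals (FITS BITPIX -64)
as_r4 = struct.pack('>%df' % npix, *pixels)  # 4-byte reals (FITS BITPIX -32)
print(len(as_r8), len(as_r4))                # 8000 4000
```

The ~7 significant digits of a 4-byte real are plenty for released flux maps.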

  rsync -a --info=NAME --relative ${origin}/extract/./adaptive_smoothing/fov_diffuse.reg .
  
Launch DropBox app to upload to cloud.
open https://www.dropbox.com/work
Invite collaborators.





#################################################################################
(Pat and Leisa only) Copy diffuse images to archive disk so they will be backed up. 
#################################################################################

idl
    relocate_target, /CHECKIN, /BACKUP, getenv("TARGET"), ['data/extract/adaptive_smoothing/']

  exit



***THIS ENDS THE PROCESS OF CREATING DIFFUSE IMAGES.***
You can kill the "adaptive_smoothing_${TARGET}" screen session with ^a\.


If you want to proceed with spectral analysis of diffuse emission (described in the remainder of this recipe), then skip the rest of this section.

If instead you are going to put this target aside for a while, then relocate the target to the archive disk as described in this section.


First, stop editing any files (e.g. diffuse_notes.txt) in the $TARGET tree on the fast disk, as you're about to move those files.  After the move, you can resume editing those files from the archive space.


Second, remove some large files we do not want to archive:
From <target>/data/ directory:
  find pointing_*/obsid_*/instmap -name "*.instmap" -o -name "*.timemap" | grep -v full |xargs -p rm -v


Third, from the ARCHIVE machine (e.g. Hiawatha if the archive disk is hiawatha1 or hiawatha2) run the following, setting the keywords ARCHIVE_VOLUME and FAST_VOLUME (where you were working) to be appropriate for your situation.  Below, the example has ARCHIVE_VOLUME='hiawatha2', FAST_VOLUME='osceola_fast':

  cd /bulk/osceola_fast/targets
  echo $TARGET
  
  idl -queue |& tee -a relocate_target.log
    relocate_target,  /CHECKIN, getenv('TARGET'), ARCHIVE_VOLUME='hiawatha2', FAST_VOLUME='osceola_fast'
    
    exit
      
The relocate_target tool will report (in <target>.trash.log) any files that are on the archive volume (hiawatha2 in this example) but not on the "fast" volume (osceola_fast) where you've executed this procedure.  You should make sure you understand how that situation came to be, so you can rest assured that nothing has gone wrong.  The possible explanations for such a file are:

1. A previous relocate_target run copied the file to the fast volume and the file was later removed during the course of data processing.  That's fine, if such a removal is expected (e.g. point sources that were pruned during a patch, or files that were gzipped in the archive space and got unzipped on the fast disk).

2. The file was never copied to the fast volume, suggesting an rsync problem.

3. The file was created on the archive volume while you were doing this procedure, which suggests that you were working in both the archive and fast copies of the target---very bad.

FOLLOW THE INSTRUCTIONS IN THE DIALOG BOX TO EDIT <target>.trash.log TO REFLECT THE FILES THAT ARE TRULY TRASH, AND THEN TO REMOVE THOSE FILES.



When you're satisfied that the target is safely on the archive volume, hide the target on the fast disk.  Make sure $TARGET is defined and set to the correct target name.  Make sure you have write permission in the /trash directory on the fast disk (or the code below won't work!).

From the ARCHIVE machine:  (why archive machine??)

  pushd /bulk/osceola_fast/targets
  echo ${TARGET}
  chmod 777 ${TARGET}
  mv ${TARGET} trash
  
  popd


Do not destroy the trash in /bulk/osceola_fast/targets until you are sure that the backup of the archive disk is up-to-date.

When it is time to get rid of the trash, replace target_to_delete below with the appropriate target name and do this (if the permissions are not already enabled), running on the machine that holds the trash (Osceola in this example):

  pushd /bulk/osceola_fast/targets/trash/
  sudo chmod -R 777 target_to_delete
  
  'rm' -rf target_to_delete
  popd




#################################################################################
###### APPLY THE WVT BINNING ALGORITHM (Diehl 2006) TO THE DATA (optional) ######
#################################################################################
Working in adaptive_smoothing/

Because diffuse emission is usually very faint (often to the point of non-existence), I tessellate only the broad-band images, so that we at least have a few counts to work with.

To tessellate the entire field, simply run the following.

  nice idl -queue |& tee -a wvt_binning.log
    .run
    tara_smooth, 'fullfield', [30,40], BAND_NAME='soft_band', /WVT_BINNING, /SHOW, RUN_NAME=''
    exit, /NO_CONFIRM
    end

    
    
We often want to divide the field into regions "inside" and "outside" a boundary, tessellating each zone independently (as introduced by the Carina project, and as done in 30dor). 

Copy the region that defines the zone boundary, giving it a standard name.
  cp perimeter_softband_6.5e-9.reg  inside_zone.reg
  cp perimeter_softband_6.5e-9.reg outside_zone.reg

Edit inside_zone.reg and outside_zone.reg as needed to get the correct include/exclude logic.  CIAO's "field()" syntax will be helpful, especially for outside_zone.reg.  For example:
       field()
       -polygon ....

Make sure inside_zone.reg and outside_zone.reg are in CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT.


Apply our zone boundary to the scene mask in order to define two new scenes, named "inside_zone" and "outside_zone".  

  dmcopy "nominal_fullfield.mask[ sky=region(inside_zone.reg) ][opt full]"  inside_zone.mask clob+
  dmcopy "nominal_fullfield.mask[ sky=region(outside_zone.reg)][opt full]" outside_zone.mask clob+
  ds9 -lock frame wcs inside_zone.mask outside_zone.mask -region inside_zone.reg &
  
If the masks include odd regions around the field edges, then revise the region files, save them in the PHYSICAL coordinate system, and remake the masks. 
  
  
Via symlinks, we'll use the same data (emap, observed events, stowed bkg events) for these new scenes as we did for the "fullfield" scene.

  foreach band (soft_band full_band)
    echo $band
    pushd $band
    ln -s fullfield.bkg.img      inside_zone.bkg.img
    ln -s fullfield.bkg.img     outside_zone.bkg.img
  
    ln -s fullfield.diffuse.img  inside_zone.diffuse.img
    ln -s fullfield.diffuse.img outside_zone.diffuse.img
  
    ln -s fullfield.diffuse.emap  inside_zone.diffuse.emap
    ln -s fullfield.diffuse.emap outside_zone.diffuse.emap

    ll -Ld *
    popd
  end

  
Decide if you wish to specify the centers of some tessellations, e.g. to encourage a tessellate to be centered on a feature in one of your smoothed images.  Save any tessellate centers you define in a normal ds9 region file named inside_tess_centers.reg and/or outside_tess_centers.reg; use "point regions" in  CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT.  Also save the IMAGE coordinates of your tess centers in "X/Y format" as inside_tess_centers_image.reg and/or outside_tess_centers_image.reg.



Now we can apply WVT Binning on these new zones.

The optional RUN_NAME parameter adds a prefix to the directory name in which the output will appear; this is useful when you try tessellations using several sets of fixed centers.

If you need to overwrite an existing tessellation run, then specify DISCARD_EXISTING=1.

  idl -queue |& tee -a wvt_binning_inside.log
    .run
    fn = 'inside_tess_centers_image.reg'
    if file_test(fn) then begin
      ; If you have defined tess centers, then read their image coordinates and build the optional KEEPFIXED input to tara_smooth.
      readcol, fn, ii, jj
      keepfixed = fltarr(2, n_elements(ii))
      keepfixed[0,*] = ii-1  ; KEEPFIXED is 0-based image coordinates
      keepfixed[1,*] = jj-1  ; KEEPFIXED is 0-based image coordinates
    endif
    tara_smooth,  'inside_zone', BAND_NAME='soft_band', [70,40,50,60], /WVT_BINNING, /SHOW, KEEPFIXED=keepfixed, RUN_NAME='', DISCARD_EXISTING=1
    end
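
The readcol/KEEPFIXED bookkeeping above just converts 1-based IMAGE coordinates from the region file to the 0-based array coordinates tara_smooth expects.  The same bookkeeping sketched in Python (parse_tess_centers is a made-up name, not an AE routine):

```python
def parse_tess_centers(lines):
    """Parse 'X Y' pairs (1-based IMAGE coords, one per line) and return
    0-based (x, y) tuples, matching the ii-1 / jj-1 shift in the IDL code."""
    centers = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue                        # skip comments and blank lines
        x, y = map(float, line.split()[:2])
        centers.append((x - 1.0, y - 1.0))  # KEEPFIXED wants 0-based coords
    return centers

print(parse_tess_centers(['# comment', '100.5 200.5', '12 34']))
# [(99.5, 199.5), (11.0, 33.0)]
```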


    
  idl -queue |& tee -a wvt_binning_outside.log
    fn = 'outside_tess_centers_image.reg'
    if file_test(fn) then begin
      ; If you have defined tess centers, then read their image coordinates and build the optional KEEPFIXED input to tara_smooth.
      readcol, fn, ii, jj
      keepfixed = fltarr(2, n_elements(ii))
      keepfixed[0,*] = ii-1  ; KEEPFIXED is 0-based image coordinates
      keepfixed[1,*] = jj-1  ; KEEPFIXED is 0-based image coordinates
    endif
    tara_smooth, 'outside_zone', BAND_NAME='soft_band', [70,40,50,60], /WVT_BINNING, /SHOW, KEEPFIXED=keepfixed, RUN_NAME='', DISCARD_EXISTING=1
    end

  egrep -i "INFORMATION|WARNING|ERROR|DEBUG|halted|Stop|gross"  wvt_binning*.log | egrep -v "DISPLAY variable|LS_COLORS|Program caused arithmetic error"

    
The number of tessellates each WVT run produced can be found by counting polygons in each run's region file.
  grep -Hc polygon `find . -name "*.wvtbin.reg"`


Deciding which tessellation to use for diffuse extractions is more of an art than a science.  I generally choose whichever one best follows the features I see by eye -- since the diffuse emission is usually soft, the soft-band tessellation might be the place to start.  If your diffuse emission is harder, then of course use your judgment.  

The region files that define diffuse extractions in AE can handle a compound region, so if you want to combine a set of tessellates into a bigger region, just list them one-to-a-line in a text file and call that "combo.reg" or whatever, and give that to AE to extract.  


#################################################################################
############# HAND EDIT TESSELLATE POLYGONS  ##################
#################################################################################
The tessellation algorithm sometimes produces tessellates that span multiple disconnected regions.
You should review such cases to see if such regions are scientifically appropriate.

When a tesselate contains a hole in the data, two polygons will be produced.
YOU MUST EDIT THAT TESSELATE'S REGION FILE SO THE POLYGON MARKING THE "HOLE" IS "EXCLUDED" (prefixed by a minus sign).

The tessellate polygons produced by tara_smooth try to follow the boundaries of the WVT tessellates, which are defined by the *diffuse.binnumber image.  The tara_smooth tool tries to fill in any "holes" in that tessellate map, except those holes that are present in the <scene_name>.mask image.

The POLYGONS, rather than the *diffuse.binnumber image, ARE THE IMPORTANT DATA PRODUCTS---they define extraction regions for the tessellates, and they are used later to build maps of tessellate properties.  If desired, you are free to edit the tessellate polygons, e.g. as follows:
  
  pushd soft_band/sig070/wvt/
  setenv BASE "inside_zone.diffuse"
  ds9 -scale mode minmax -linear ${BASE}.binnumber -region ${BASE}.wvtbin.reg &
  
  Edit polygons as desired.
  Resave in CELESTIAL SEXAGESIMAL FORMAT.
  Split that edited set of tessellate regions into separate files, e.g. 
    rsync -av tesselates /tmp/
    
    foreach file (tesselates/*wvtbin*.reg)
      set pattern=`basename $file .reg | cut -d '.' -f3`
      echo '# Region file format: DS9 version 4.1'  > $file 
      grep $pattern ${BASE}.wvtbin.reg             >> $file 
    end
  wc -l tesselates/*wvtbin*.reg

REMEMBER TO REVIEW/REPAIR TESSELATE REGION FILES THAT CONTAIN A "HOLE" IN THE FIELD, so that the polygon marking the "hole" is prefixed by a minus sign.
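
The include/exclude logic being repaired here is: a point belongs to the region if it falls inside an included polygon AND outside every excluded (minus-prefixed) polygon.  A small ray-casting sketch in Python (illustrative only; CIAO's region library does the real filtering):

```python
def inside(poly, x, y):
    """Standard ray-casting point-in-polygon test."""
    n, hit = len(poly), False
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

def in_region(included, excluded, x, y):
    """Membership for a compound region: any included polygon, minus holes."""
    return any(inside(p, x, y) for p in included) and \
           not any(inside(p, x, y) for p in excluded)

outer = [(0, 0), (10, 0), (10, 10), (0, 10)]   # the tessellate outline
hole  = [(4, 4), (6, 4), (6, 6), (4, 6)]       # the "hole", marked -polygon
print(in_region([outer], [hole], 2, 2))        # True:  in outline, not in hole
print(in_region([outer], [hole], 5, 5))        # False: inside the hole
```

If the hole polygon is NOT excluded, the point (5, 5) would wrongly be counted inside the region -- which is exactly the repair demanded above.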




#################################################################################
############# EXTRACT DIFFUSE SOURCES ##################
#################################################################################
SET UP A DIRECTORY FOR EXTRACTING DIFFUSE SOURCES and MOVE THERE

Move to a directory appropriately named for diffuse regions you wish to extract:

  * Diffuse regions defined by hand should be extracted in the existing directory extract/diffuse_sources.noindex

  * Diffuse regions defined by the WVT Binning algorithm above should be extracted in a directory whose name reflects the SNR used in the WVT Binning, e.g. extract/diffuse_wvt_sig040.


=====================================================================
SET UP A SCREEN SESSION FOR EXTRACTION OF DIFFUSE SOURCES (from non-grating ObsIDs)

Create a workspace for diffuse analysis.
Take notes in diffuse_sources.noindex/diffuse_notes.txt.
Create a new 'screen' session named after your target (***replace <target> below with your target name***), and create a 'screen' window for each ObsID_device set (usually there's an ObsID_FI and an ObsID_BI):

  cd  <target>/data/extract/

  cd diffuse_sources.noindex/
  touch diffuse_notes.txt
  tw    diffuse_notes.txt


  setenv TARGET <target>

  setenv PASS 'pass_diff1'

  setenv OBS_LIST ""
  foreach dir (`'ls' -d ../obs[0-9]* | sort --field-separator=_ --key=2 -r`)
    set obs=`basename $dir | sed -e "s/obs//"`
    if (! -d $dir) continue
    if (`dmkeypar $dir/validation.evt GRATING echo=yes` != 'NONE') then
      printf "\nObsID %s used a grating and is not suitable for diffuse analysis.\n" $obs
      continue
    endif
    setenv OBS_LIST "$OBS_LIST $obs"
    end
  echo $OBS_LIST  
  
  setenv SCREEN_ARCH ''
  if (`uname -m` == arm64)  setenv SCREEN_ARCH 'arch -arm64 tcsh'

  setenv SCREEN_NAME    AE_diffuse_${TARGET}
  setenv PROMPT              "%m::${TARGET} (${PASS}) %% "
  screen -S $SCREEN_NAME -d -m -t "${TARGET}"  $SCREEN_ARCH 
  screen -S $SCREEN_NAME -X defslowpaste 10
  
  screen -S $SCREEN_NAME -X setenv PROMPT "%m::${TARGET} (logs) %% "
  screen -S $SCREEN_NAME -X screen -t logs  $SCREEN_ARCH
  sleep 1
  foreach obs ($OBS_LIST)
    screen -S $SCREEN_NAME -X setenv OBS     ${obs}
    screen -S $SCREEN_NAME -X setenv PROMPT "${obs} %% "
    screen -S $SCREEN_NAME -X screen -t   obs${obs}  $SCREEN_ARCH
    sleep 0.5
  end
  screen -S $SCREEN_NAME -r
  sleep 4
  screen -S $SCREEN_NAME -X select 0
  screen -S $SCREEN_NAME -X at \# slowpaste 10
  screen -S $SCREEN_NAME -X at \# stuff 'unset correct^M'
  screen -S $SCREEN_NAME -p 0 -X stuff 'echo $OBS_LIST | wc -w^M'
  screen -S $SCREEN_NAME -X windowlist -b
 
 
Double-check that you have a screen window for every ObsID.  Use ^a" to see the list of screen windows.  The last window index should be ONE LARGER than the number of ObsIDs (which was printed by the "echo" command above).  There should be a screen window called "logs" as well as one for each ObsID, plus the usual top-level $TARGET window #0.

If you are missing some screen windows, then read the appendix titled THE "SCREEN" UTILITY in the document validation_procedure.txt.



"screen" help:
  ^a"  select from list of windows using arrow keys
  ^an  next window
  ^ap  previous window
  ^ak  kill the current window
  ^aw  list the existing windows
  ^a^c create a new window and switch to it
  ^aA  name the current window
  ^a?  help
  
  screen -S $SCREEN_NAME -r    attach to existing screen windows
  screen -ls                   find all existing screen sessions
  ^ad                          detach from "screen" session
  
  When your data processing is completely finished, you can destroy each "screen" window by exiting its shell in the normal way, e.g. via "exit" or "^d".  All screen windows can be destroyed with "^a\".


  
A note about running on multiple computers:
---------------------------------------------------------------------

Most tasks in AE will generally run fastest on the computer that is hosting the disk on which the data are stored.  However, if you have more ObsIDs than the number of "cores" on that computer, and if your data are visible to other computers (via a networked filesystem), then you will want to spread your ObsID-based processing across multiple computers.  

In window #0 run the following script to print the commands needed to ssh to each machine you want to use:

  
  foreach machine (hiawatha_data cochise_data ouray_data osceola_data tecumseh_data sequoia_data ouachita_data)
    set fixed_display=`ssh $machine 'echo $DISPLAY'`

    printf '\n\nTo set up %s run:\n\n' $machine
    
    printf '  setenv WD_DESIRED "`echo $PWD|sed -e %s`"\n\n'  "'s|/Volumes/|/bulk/|'" 
    
    printf '  ssh -t %s env OBS_LIST=\\"$OBS_LIST\\" OBS=\\"$OBS\\" PROMPT=\\"%s $PROMPT\\" WD_DESIRED=\\"$WD_DESIRED\\" DISPLAY=%s caffeinate -s %s \n\n' $machine $machine $fixed_display $SHELL

    printf '  cd "$WD_DESIRED" ; pwd\n\n'
  end
  

Then, in the screen window for each ObsID that you want to be processed on a remote machine, paste the set of commands printed by the script above for the appropriate remote machine.  

*****
If the pasted "cd" command fails with "Permission denied", then on the machine hosting the data you need to run:
  sudo nfsd restart
*****



NOTE -- The best procedure to terminate the screen session is:

1. In screen window #0 run the following to paste the 'exit' command into every window, which will cleanly terminate any ssh connections you have to other computers (avoiding zombie processes):
  screen -S $SCREEN_NAME -X at \# stuff 'exit^M'
  
2. If that doesn't end the screen session, then type the screen command "^a\".


If you forgot to "exit" the ssh connections as instructed above, and instead simply killed the screen session with ^a\, the remote shells (which are running "caffeinate" to keep the machine from sleeping) will NOT DIE.  In that case, the easiest way to kill them (and recover from your mistake) is to ssh to the machine again and run
  killall caffeinate


To put a remote machine to sleep when you're completely done with your AE analysis and don't need its processors anymore, execute
  sudo pmset sleepnow; exit


{For reference, in the ssh commands used above explicitly setting the DISPLAY environment variable is a trick to force all the ssh sessions to use the same value for $DISPLAY, which allows multiple IDL sessions on the remote machine to consume only one IDL license.  Note that 'X11 forwarding' must be configured in your ssh preferences.  Note that using absolute machine names in DISPLAY, e.g. 'cochise:0', disables the security protections afforded by the X11 forwarding mechanism in ssh and is not recommended.}



=====================================================================
EXTRACTION REGIONS & LIST OF REGIONS

-------------------------------------------------------
EXTRACTION REGIONS

By whatever clever means you can find, construct region files that define one or more diffuse ``objects''.  Since you've already masked point sources the diffuse region files do not need to worry about excluding point sources.

* The regions must be in CELESTIAL COORDINATES USING SEXAGESIMAL FORMAT.  (CIAO will not accept celestial coordinates in decimal format.)  

* Each region file can contain multiple or compound components if desired, e.g. a polygon 'minus' a circle.  

* CIAO requires that 'excluded' regions appear after 'included' regions.  

* The CIAO syntax 'field()' may NOT be used to represent the field of view (because ds9 does not recognize it)---you must use the regions in fov_diffuse.reg (in SEXAGESIMAL format) that you built earlier.

* Keep in mind what happens when an extraction region extends outside the footprint of the target's observations:
  1. Our calculation of surface brightness is NOT affected, because our calibration method uses the integral of each ObsID's exposure map within the aperture.  Thus off-field real estate is no different than masked real estate.
  
  2. The geometric area of the extraction region that we calculate (tool calculate_diffuse_geometric_areas) INCLUDES that off-field real estate.  That geometric area is used only when a region luminosity is calculated, by multiplying a surface brightness by a geometric area.  In that calculation, the measured surface brightness is assumed to be constant over the entire extraction region (including masked areas and off-field areas).  In other words, we extrapolate measured surface brightness into any unobserved portions of the extraction apertures.
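
Those two points can be put into arithmetic: surface brightness uses only the observed (on-field, unmasked) portion of the aperture, while the luminosity calculation extrapolates that brightness over the full geometric area.  A hedged numeric sketch with made-up numbers (the real calibration is AE's):

```python
# Extraction aperture: 100 arcsec^2 total geometric area, of which only
# 80 arcsec^2 falls on the detector (the rest is off-field or masked).
observed_area  = 80.0     # arcsec^2 with nonzero exposure map
geometric_area = 100.0    # arcsec^2, INCLUDES off-field real estate
counts         = 400.0    # events extracted in the aperture
emap_integral  = 8.0e5    # cm^2 s, integrated ONLY over observed pixels

# Surface brightness is unaffected by off-field real estate -- those
# pixels contribute to neither the counts nor the emap integral.
surface_brightness = counts / emap_integral / observed_area

# The luminosity-style quantity extrapolates that brightness over the
# whole geometric area, assuming constant brightness in unobserved parts.
aperture_flux = surface_brightness * geometric_area
print(surface_brightness, aperture_flux)   # 6.25e-06 0.000625
```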




-------------------------------------------------------
LIST OF REGIONS

If you've built diffuse extraction regions in adaptive_smoothing/, then modify the filename wildcard (*diffuse*.reg) in the shell script below and execute it to make a list of those regions.

  'rm' diffuse_all.srclist
  foreach file (../adaptive_smoothing/*diffuse*.reg)
    echo "`basename $file .reg`  $file" >> diffuse_all.srclist
  end
  cat diffuse_all.srclist



OR, if you have tessellated the field and want to extract all the tessellates, then modify the script below.


  ln -s ../adaptive_smoothing/soft_band/sig070/wvt/*wvtbin.reg tesselates.reg
  ln -s ../adaptive_smoothing/soft_band/sig070/wvt/tesselates .
  'rm' diffuse_all.srclist
  foreach file (tesselates/*.reg)
    set name=`basename $file .reg | sed -e 's/inside_zone.diffuse.wvtbin/sig070_tess/'`
    echo "$name  $file" >> diffuse_all.srclist
  end
  cat diffuse_all.srclist



NOTES:
The file diffuse_all.srclist built above has two columns:
  * A name for the source (no spaces are allowed).
  * The path to the ds9 region file that defines the source.

Check the diffuse_all.srclist file -- you may want to edit it.  For example, I include a "diffuse_global" region that contains all diffuse events, using fov_diffuse.reg.



(optional) PERFORM ROUGH PHOTOMETRY ON REGIONS OF INTEREST
=====================================================================
You can obtain rough full-band photometry on the regions you've defined before extracting them, if you're worried that many of them may have too few counts to bother fitting.

  foreach regfile (`awk '! /^[[:space:]]*(;|$)/ {print $2}' diffuse_all.srclist`)
    printf "\n--------------------\n$regfile PHOTOMETRY"  
    foreach band (../adaptive_smoothing/500:1000_band \
                  ../adaptive_smoothing/soft_band     \
                  ../adaptive_smoothing/hard_band     \
                  ../adaptive_smoothing/full_band)
      printf "\n  net counts in %s =\n" $band
      printf "       %9.0f\n" `dmstat "$band/fullfield.diffuse.img[sky=region($regfile)]" centroid=no |grep sum | sed -e 's/   sum://'`
      printf "    -  %9.0f\n" `dmstat     "$band/fullfield.bkg.img[sky=region($regfile)]" centroid=no |grep sum | sed -e 's/   sum://'`
    end                                                                                        
  end
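
The dmstat arithmetic above reduces to: sum the diffuse counts image inside the region, then subtract the sum of the scaled background image.  As a Python sketch (short pixel lists stand in for the region-filtered images):

```python
def net_counts(diffuse_pixels, bkg_pixels):
    """Rough background-subtracted photometry, mirroring the dmstat sums."""
    return sum(diffuse_pixels) - sum(bkg_pixels)

# Stand-ins for dmstat's 'sum' over the [sky=region(...)]-filtered images:
diffuse = [3, 0, 2, 5, 1]                # observed counts in the region
bkg     = [0.5, 0.5, 0.25, 0.5, 0.25]    # scaled stowed-background pixels
print(net_counts(diffuse, bkg))          # 11 - 2.0 = 9.0
```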




=====================================================================
EXTRACT OBSERVED SPECTRA IN DIFFUSE REGIONS
EXTRACT STOWED BACKGROUND SPECTRA IN DIFFUSE REGIONS

-------------------------------------------------------
From screen window #0 (named $TARGET), run the code below to verify consistency between the unmasked and masked exposure maps.  This is critical to proper subtraction of stowed background!

  idl  |& tee -a verify_masked_emap.log
    .run
    print, now()
    print
    foreach obsname, strtrim(strsplit(getenv('OBS_LIST'), /EXTRACT), 2) do begin
      obsdir = '../obs' + obsname + '/'
      fn1 = obsdir+'diffuse.emap'
      fn2 = obsdir+    'obs.emap'
      if ~file_test(fn1) then continue

      diffuse_emap = readfits(fn1)
          obs_emap = readfits(fn2)

      if ~array_equal(size(obs_emap, /DIM), size(diffuse_emap, /DIM) ) then begin
        print, fn1, fn2, F="(%' ERROR: %s and %s emaps have different dimensions!')"
        continue
      endif

      ratio = obs_emap / diffuse_emap

      ratio_range = minmax(/NAN, ratio)
      if array_equal(ratio_range, [1.0,1.0]) then $
        print, obsname, F='(%"  %12s is OK")' $
      else begin
        print, obsname, F='(%"ERROR: %8s diffuse.emap and obs.emap are NOT CONSISTENT (see plot).")'
        dataset_1d, id, ratio, BINSIZE=0.01, DATASET=obsname, XTIT='unmasked emap / masked emap', WIDGET_TITLE=getenv('TARGET')
      endelse
    endforeach
    print
    end

    exit, /NO_CONFIRM

If you see ERROR messages then YOU SHOULD NOT PROCEED!!!

The most common scenario is that after building the masked emap (diffuse.emap) you rebuilt the unmasked emap (obs.emap) using a different OBF calibration.  Correct subtraction of stowed background from your diffuse spectra requires that the two emaps have identical values (outside the masking).  The solution is to re-mask the offending ObsIDs, as described earlier in this procedure.

Your smoothed diffuse images are not affected by the obs.emap vs. diffuse.emap inconsistency discussed here.
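The test above amounts to requiring that, outside the masking, every pixel of obs.emap equals the corresponding pixel of diffuse.emap (ratio exactly 1.0).  A Python sketch of the same test (not part of the AE workflow), with hypothetical 2x2 pixel lists standing in for the two FITS images:

```python
# Consistency test for the masked/unmasked exposure maps, sketched with
# hypothetical 2x2 pixel lists standing in for the two FITS images.
obs_emap     = [[100.0, 200.0], [300.0, 400.0]]
diffuse_emap = [[100.0, 200.0], [300.0,   0.0]]  # one pixel masked to zero

# Outside the masking (diffuse.emap > 0) the two emaps must be identical,
# i.e. every ratio obs/diffuse must be exactly 1.0.
ratios = [o / d
          for orow, drow in zip(obs_emap, diffuse_emap)
          for o, d in zip(orow, drow)
          if d > 0]

ok = min(ratios) == 1.0 and max(ratios) == 1.0
print("OK" if ok else "ERROR: emaps are NOT CONSISTENT")
```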





-------------------------------------------------------
From screen window #0 (named $TARGET), run the shell script below.
STOP if any ENERG_LO/ENERG_HI keyword is not 1.0 keV.

  foreach obs ($OBS_LIST)
    echo obs${obs}
    set asol_fn=`readlink "../obs${obs}/obs.asol"`
    set  obsdir=`dirname $asol_fn`
    
    foreach file (instmap/ccd3.1000eV.instmap instmap/ccd7.1000eV.instmap \
                  asphist/ccd3.full.instmap   asphist/ccd7.full.instmap)
      if (-e $obsdir/$file) dmlist $obsdir/$file head | egrep "ENERG_LO|ENERG_HI"
    end
  end



-------------------------------------------------------
From screen window #0 (named $TARGET), run:

  foreach src (`egrep -v '^[[:space:]]*(;|$)' diffuse_all.srclist | cut -d " " -f1`)  
    echo $src
    foreach obs ($OBS_LIST)
      echo "  $obs"
      mkdir -p          $src/${obs}/
      ln -s extract.reg $src/${obs}/background.reg
    end
  end

  cat > block.txt << \STOP
    nice idl -queue |& tee -a extract_diffuse.${OBS}.log
    file_delete, "waiting_for_"+getenv("OBS"), /ALLOW_NONEXISTENT 
\STOP
  screen -S $SCREEN_NAME                -X readreg A block.txt


  cat > block.txt << \STOP
      .run ae 
      .run 
      obs    = getenv('OBS')
      obsdir = '../obs' + obs + '/'
      file_delete, obsdir+"ae_finished", /ALLOW_NONEXISTENT 
      semaphore = wait_for_cpu(LOW_PRIORITY=strmatch(obsdir,'*_BI'))

      ardlib       = obsdir + 'ardlib.par'
      pbk_filename = obsdir + 'obs.pbkfile'
      msk_filename = obsdir + 'obs.mskfile'
      aspect_fn    = obsdir + 'obs.asol'
      
      lock_fn     = obsdir + 'ae_lock'
      flag_fn     = obsdir + 'ae_finished'
      file_delete, /ALLOW_NONEXISTENT, lock_fn
      file_copy  , '/dev/null'       , lock_fn, /OVERWRITE

      acis_extract, 'diffuse_all.srclist', obs, obsdir+'diffuse.evt', /CONSTRUCT_REGIONS, /DIFFUSE
    
      acis_extract, 'diffuse_all.srclist', obs, obsdir+'diffuse.evt', /EXTRACT_SPECTRA, EMAP_FILENAME=obsdir+'diffuse.emap', EMAP_ENERGY=1.0, ARDLIB_FILENAME=ardlib, PBKFILE=pbk_filename, MSKFILE=msk_filename, ASPECT_FN=aspect_fn
      
      acis_extract, 'diffuse_all.srclist', obs, obsdir+'stowed.evt', /EXTRACT_BACKGROUNDS, EMAP_FILENAME=obsdir+'stowed.emap'

      file_move, lock_fn, flag_fn, /OVERWRITE 
      exit, /NO_CONFIRM
      end
\STOP
  screen -S $SCREEN_NAME                -X readreg B block.txt


  foreach obs ($OBS_LIST)
    touch waiting_for_${obs}
    screen -S $SCREEN_NAME -p obs${obs} -X   paste A
    sleep 1
    while ( -e waiting_for_${obs} )
      echo "Launching IDL ..."
      sleep 1
    end
    printf "\nLaunching ObsID ${obs}\n"
    screen -S $SCREEN_NAME -p obs${obs} -X   paste B
    sleep 5
  end

  
Each ObsID will briefly open a ds9 session that is used to convert the supplied diffuse regions from celestial to physical (SKY) coordinates.



NOTES:
WARNING!  THE SURFACE BRIGHTNESS VALUES YOU DERIVE FROM SPECTRAL FITTING WILL BE WRONG if the EMAP_ENERGY parameter in AE calls does not correspond to the mono-energy used to make the exposure maps.  See the discussion of how we define an "area on the sky" for a diffuse source in the Diffuse Sources section of the AE manual and in Broos et al. (2010).


We assume that your emap was built for a single energy (supplied as the "monoenergy" parameter to mkinstmap). 


In this diffuse analysis, the  ACIS Stowed Data takes on the role of "background" in AE and in XSPEC.  Using symlinks, we force the background region (background.reg) in each extraction to be the same as the observation's aperture (extract.reg).


It's not obvious why we use the unmasked stowed data products for the background extraction, rather than the masked ones.  The reason is that we don't have a masked version of stowed.emap.  The assumption is that there is nothing wrong with the stowed data under the point-source masks, as long as we account for its "area" by integrating the unmasked stowed.emap product.  That approach could break down if a mask covered a significant fraction of the diffuse region AND the extra stowed data that should have been masked was unrepresentative of the observation (e.g. due to off-axis angle effects).



---------------------------------------------------------------------
As soon as the code above starts launching ObsIDs, start monitoring the progress of the AE runs by executing the following script from screen window #1.  If you see that no runs are making progress, then look at each screen window to see what has happened.  

  touch .marker
  while 1
    sleep 60
    printf '\nThe following AE runs are still making progress:\n'
    printf '  %s\n' `find . -maxdepth 1 -newer .marker -name "*.log"`
    touch .marker
    if (! `ls ../obs[0-9]*/ae_lock | wc -l`) break
  end

  egrep -i "WARNING|ERROR|DEBUG|halted|Stop" extract_diffuse.*.log | egrep -v "DISPLAY variable|LS_COLORS|arithmetic error|error=|long time|(GRATTYPE|CCD_ID)' in supplied header|CCD_ID|not observed|StopTime|LIVETIME and EXPOSURE not found"

If you see that no runs are making progress, then look for failure messages with the following:
  egrep -i "stop|halted" extract_diffuse.*.log

  
The following warning message requires action on your part:

  WARNING! Aborted extraction because WMAP is zero! You should investigate, and then remove inside008/9894/obs.stats so that the MERGE will not see it.
  
For each of these cases, display the ObsID's emap with the source's extraction region.  You will probably find that they barely overlap, which explains why the WMAP computed by dmextract was empty.  You should discard extractions like this, by removing the extraction directory or removing just the obs.stats file therein.


Look for messages about "inconsistency" between the exposure map and the ARF.  Such inconsistency can impair the calibration of the diffuse spectra.  Usually, the problem is that the emap and ARF were built with different epochs of the CALDB, in which the OBF filter transmission calibrations differ significantly.  See the procedure titled calibration_patch_procedure.txt.



=====================================================================
HIDE INCONSEQUENTIAL EXTRACTIONS OF EACH DIFFUSE REGION (HIGHLY RECOMMENDED!)

If you have a reasonable number of total extractions:

  foreach src (`egrep -v '^[[:space:]]*(;|$)' diffuse_all.srclist | cut -d " " -f1`)  
    printf '\n===================\n%s::\n' $src
    setenv CMD "ds9 -lock frame wcs -bin factor 8 -title '$src' "
    foreach obs ($OBS_LIST)
      set file=$src/${obs}/obs.stats
      if (! -e $file) continue
      printf '%10s : ' $obs
      dmlist $file head | grep SRC_CNTS
      setenv CMD "$CMD  $src/${obs}/source.emap -region $src/${obs}/extract.reg -zoom to fit "
    end
    eval $CMD &
  end

If some extraction regions just barely touch some ObsIDs, then you probably should HIDE THOSE INCONSEQUENTIAL EXTRACTIONS (by removing obs.stats), so that the merge has to perform less complex weighted averaging (e.g. of ARFs and RMFs).  For example:
  foreach extraction (diffuse2/6418_BI diffuse3/6419_BI)
    echo $extraction
    dmlist $extraction/obs.stats head |grep SRC_CNTS
    rm -i  $extraction/obs.stats
  end





=====================================================================
VERIFY THAT EVERY EXTRACTION IS FRESH AND COMPLETE 
MERGE PRODUCTS ACROSS ALL ObsIDS AND PERFORM PHOTOMETRY 

Run this even if your target has just one ObsID.  

From screen window #0 (named $TARGET), run:
  mkdir tables
  idl -queue |& tee -a merge.log
    .run ae
    .run
    ; Symlink to AE fitting scripts.
    if ~file_test('xspec_scripts') then file_link, /VERBOSE, file_which('xspec_scripts'), '.'

    wait_while_files_exist, 60, '../obs*/ae_lock'
    verify_extractions, 'diffuse_all.srclist', /DIFFUSE
    
    eband_lo = [0.5, 0.5, 2.0,  0.5, 0.7, 1.1, 0.5,   9.0 ]
    eband_hi = [7.0, 2.0, 7.0,  0.7, 1.1, 2.3, 1.0,  12.0 ]
    obsname = strtrim(strsplit(getenv('OBS_LIST'), /EXTRACT), 2)

    ; Full merge.
    acis_extract, 'diffuse_all.srclist', /MERGE_OBSERVATIONS, EBAND_LO=eband_lo, EBAND_HI=eband_hi

    ; FI and BI merges.
    foreach merge_name, ['FI','BI'] do begin
      this_obsname = obsname[where(/NULL, strmatch(obsname, '*'+merge_name+'*' ))]
      if isa(this_obsname) then begin
        acis_extract, 'diffuse_all.srclist', this_obsname, /MERGE_OBSERVATIONS, EBAND_LO=eband_lo, EBAND_HI=eband_hi, MERGE_NAME=merge_name
      endif else begin
        print, merge_name, F='(%"\nNo %s extractions to merge.")'
      endelse
    endforeach

    ; Single-ObsID merges
    foreach this_obsname, obsname do begin
      merge_name   = 'EPOCH_'+this_obsname
      acis_extract, 'diffuse_all.srclist', this_obsname, /MERGE_OBSERVATIONS, EBAND_LO=eband_lo, EBAND_HI=eband_hi, MERGE_NAME=merge_name
    endforeach

    ;;VERIFY IN ds9 THAT THE DIFFUSE REGIONS ARE WHAT YOU WANTED
    readcol, 'diffuse_all.srclist', sourcename, catalog_region_fn, FORMAT='A,A', COMMENT=';'
    sourcename = strtrim(sourcename,2)
    ind = where(sourcename NE '', num_sources)
    sourcename          = sourcename         [ind]
    catalog_region_fn   = catalog_region_fn  [ind]
    print, num_sources, F='(%"\n%d sources found in catalog.\n")'
    forprint, sourcename
    target = getenv("TARGET")
    ds9_cmd = 'ds9 -lock frame none -bin factor 8 -title "' +target+ ' Diffuse Extractions; Soft-band Flux Image" '
    for ii=0,num_sources-1 do $
      ds9_cmd +=  string(sourcename[ii],catalog_region_fn[ii], F='(%" %s/source.evt -region %s ")')
    ds9_cmd += '../adaptive_smoothing/soft_band/sum_of_4_bands/fullfield.diffuse_filled.flux -zoom to fit'
    print, ds9_cmd
    run_command, ds9_cmd+'>& /dev/null &'
    exit, /NO_CONFIRM
    end
    
  egrep -i "WARNING|ERROR|DEBUG|halted|Stop" merge*.log | egrep -v "DISPLAY variable|LS_COLORS|arithmetic error|no in-band|No HDUNAME|fitsio|FILTER|Lightcurves missing|DETNAM has different value|Merge EPOCH|kywd PRIM_OBS"


NOTES:  
If you have a very large number of diffuse regions then you can divide this merge task among several IDL processes:
      idl
        .run ae
        ae_split_srclist, 12, 'diffuse', SRCLIST_FILENAME='diffuse_all.srclist', SCREEN_NAME='fit_diffuse'+'_'+getenv("TARGET") 
        exit
    
    Then execute the shell commands suggested by ae_split_srclist to create a separate screen window for each segment.
    
    In each of those screen windows, paste the following commands to process a catalog segment:
      nice idl -queue |& tee -a merge${SEGMENT}.log
        .run ae
        .run
        segment = getenv('SEGMENT')
        eband_lo = [0.5, 0.5, 2.0,  0.5, 0.7, 1.1, 0.5,   9.0 ]
        eband_hi = [7.0, 2.0, 7.0,  0.7, 1.1, 2.3, 1.0,  12.0 ]
    
        acis_extract, 'diffuse'+segment+'.srclist', /MERGE_OBSERVATIONS, EBAND_LO=eband_lo, EBAND_HI=eband_hi
        exit, /NO_CONFIRM
        end
        
      egrep -i "WARNING|ERROR|DEBUG|halted|Stop" merge*.log | egrep -v "DISPLAY variable|LS_COLORS|arithmetic error|no in-band|No HDUNAME|fitsio|FILTER|Lightcurves missing"



=====================================================================
CALCULATE THE GEOMETRIC AREAS OF THE DIFFUSE REGIONS

We must compute the geometric areas of the diffuse regions in order to convert surface brightness estimates to fluxes integrated over the diffuse regions.  AE cannot calculate these geometric areas for two reasons:

1. The observer is free to define a diffuse region that is compound, i.e. a polygon 'minus' a circle, since CIAO will accept such regions as spatial filters.  AE does not have the ability to parse such compound region files, and thus cannot calculate an area analytically.

2. If the diffuse region were guaranteed to lie fully within some ObsID, then AE could calculate the geometric area of the exposure map that lies within the region.  However, in a multi-pointing target, a diffuse region could easily be larger than the ACIS field of view.  Thus, at no point in the extraction will AE have a FITS image that fills the diffuse region.

Thus, our only choice is to force the observer to supply a "project scene image", i.e. some image that spans all the diffuse regions.  Each diffuse region can then be applied as a spatial filter on this image, using dmcopy, and the geometric area of the result can be calculated.  A Chandra SKY (PHYSICAL in ds9) coordinate system (corresponding to any ObsID) must be defined on this scene image.

In our own workflow, we generate various project-spanning multi-ObsID images, any of which serve nicely for computing region areas below.
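The area computation itself is just pixel counting: filter the scene image with the region, count the pixels that survive, and multiply by the pixel area.  A Python sketch (outside the AE workflow) assuming unbinned ACIS SKY pixels (0.492 arcsec on a side), with a hypothetical boolean mask standing in for the dmcopy region filter:

```python
# Geometric area of a diffuse region via pixel counting.  The mask below is
# a hypothetical stand-in for the result of filtering the scene image with
# the region via dmcopy; unbinned ACIS SKY pixels are 0.492 arcsec on a side.
PIXEL_SIZE_ARCSEC = 0.492

region_mask = [[True,  True,  False],
               [True,  True,  False],
               [False, False, False]]

npix = sum(row.count(True) for row in region_mask)
area_arcsec2 = npix * PIXEL_SIZE_ARCSEC ** 2
print(f"{npix} pixels -> {area_arcsec2:.3f} arcsec^2")
```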


  idl -queue |& tee geometric_areas.log
    .run ae
    .run
    ; Specify the name of a "scene" image whose FOV encompasses all the diffuse regions.
    ; The pixel values in this image are irrelevant.
    scene_fn    = '../adaptive_smoothing/fullfield_template.img'
    if ~file_test(scene_fn) then scene_fn = '../adaptive_smoothing/iarray.diffuse.emap'

    calculate_diffuse_geometric_areas, 'diffuse_all.srclist', scene_fn
    exit, /NO_CONFIRM
    end



=====================================================================
CHECK NORMALIZATION OF STOWED BACKGROUNDS

Here we verify that the merged background-corrected spectra have flux consistent with zero in the energy band (9-12 keV) on which the stowed scaling was derived.

From screen window #0 (named $TARGET), run:

  mkdir tables
  
  idl -queue |& tee check_scaling.log
    .run
    ; Collate each type of merge run earlier (merge.log).
    outdir  = 'tables/'
    obsname = strtrim(strsplit(getenv('OBS_LIST'), /EXTRACT), 2)
    
    collation_list = !NULL
    
    ; Full merge.
    collation_list = [collation_list, outdir+'photometry.collated']
    acis_extract, 'diffuse_all.srclist', COLLATED_FILENAME=collation_list[0], LABEL_FILENAME='label.txt'
    
    ; FI and BI merges.
    foreach merge_name, ['FI','BI'] do begin
      collated_filename=outdir+''+merge_name+'.collated'
      this_obsname = obsname[where(/NULL, strmatch(obsname, '*'+merge_name+'*' ))]
      if isa(this_obsname) then begin
        collation_list = [collation_list, collated_filename]
        acis_extract, 'diffuse_all.srclist', MERGE_NAME=merge_name, COLLATED_FILENAME=collated_filename
      endif else begin
        file_delete, /ALLOW_NONEXISTENT, collated_filename 
      endelse
    endforeach

    ; Single-ObsID merges
    foreach this_obsname, obsname do begin
      merge_name   = 'EPOCH_'+this_obsname
      collated_filename=outdir+''+merge_name+'.collated'
      collation_list = [collation_list, collated_filename]
      acis_extract, 'diffuse_all.srclist', this_obsname, MERGE_NAME=merge_name, COLLATED_FILENAME=collated_filename
    endforeach


    ; Examine stowed background normalization for the desired type of merge.
    scale_band = 7
    bt = mrdfits(collation_list[0], 1)
    forprint, indgen(n_elements(bt)), bt.LABEL, bt.SRC_SIGNIF[scale_band], F='(%"  source #%3d (%s): SNR in scale_band =%5.1f")'

    num_plots = 0
    foreach collated_filename, collation_list do begin
      bt = mrdfits(collated_filename, 1, /SILENT)
  
      if ~almost_equal(bt.ENERG_LO[scale_band], 9.0, DATA_RANGE=range) then print, scale_band, range, F='(%"\nWARNING: for Scale Band (#%d),  %0.2f <= ENERG_LO <= %0.2f; ENERG_LO should be 9.0 keV.\n")'
      if ~almost_equal(bt.ENERG_HI[scale_band], 12.0, DATA_RANGE=range) then print, scale_band, range, F='(%"\nWARNING: for Scale Band (#%d),  %0.2f <= ENERG_HI <= %0.2f; ENERG_HI should be 12.0 keV.\n")'
  
      SRC_SIGNIF = bt.SRC_SIGNIF[scale_band]
      SRC_CNTS   = bt.SRC_CNTS  [scale_band]
      NET_CNTS   = bt.NET_CNTS  [scale_band]
      nc_error   = NET_CNTS / SRC_SIGNIF
      
      if (n_elements(bt) GT 1) then begin
        dataset_1d , id1, SRC_SIGNIF, DATASET=collated_filename, DENSITY_TITLE='scale_band (9:12 keV)', XTIT='photometry significance (9:12 keV)', PS_CONFIG={filename:'scale_signif_dist.ps'}
        
        function_1d, id2, indgen(n_elements(bt))+num_plots/20., NET_CNTS/SRC_CNTS, Y_ERROR=nc_error/SRC_CNTS, DATASET=collated_filename, NSKIP_ERRORS=1, PSYM=1, LINE=6, XTIT='source number', YTIT='NET_CNTS/SRC_CNTS (9-12 keV)', TITLE='Fractional Error in background subtraction (9:12 keV)'
  
        function_1d, id2, indgen(n_elements(bt)), intarr(n_elements(bt)), COLOR='red', DATASET='perfect normalization', PS_CONFIG={filename:'scale_signif.ps'}, /PRINT
        num_plots++
      endif  
    endforeach
    end
    
    exit, /NO_CONFIRM


The histogram of "photometry significance" made by the code above shows the distribution of AE's SRC_SIGNIF statistic (= NET_CNTS / NET_CNTS_SIGMA_UP = SNR) in the 9:12 keV band over all the diffuse regions extracted.  This statistic is a direct measure of the consistency between photometry in that band and zero.  If the stowed background was a perfect model of the 9:12 keV events in our observation(s) then this distribution should be Normal with mean zero and a standard deviation of 1.0.  
If 3-sigma deviations from perfect normalization are found (|SRC_SIGNIF| > 3), you should repeat the plotting section of the code for each single-ObsID merge (by changing "collated_filename"); that should identify which ObsIDs have the most significant stowed-scaling problem.

The code above also plots an estimate of the fractional error in each region's 9:12 keV background subtraction, with error bars.
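As a quick numerical summary of the same idea: the SRC_SIGNIF values in the scale band should have mean near zero, standard deviation near 1.0, and no |SNR| > 3 outliers.  A Python sketch (outside the AE workflow) on hypothetical SNR values; in the real workflow these come from the collated photometry table:

```python
import math

# Hypothetical SRC_SIGNIF (SNR) values in the 9:12 keV scale band for a set
# of diffuse regions; the real values come from the collated photometry table.
snr = [-0.8, 0.3, 1.1, -0.2, 0.5, -1.4, 0.9, 0.1]

mean = sum(snr) / len(snr)
std = math.sqrt(sum((s - mean) ** 2 for s in snr) / (len(snr) - 1))
outliers = [s for s in snr if abs(s) > 3]

# A correctly scaled stowed background should give mean ~0, std ~1, no outliers.
print(f"mean={mean:+.2f}  std={std:.2f}  |SNR|>3 outliers: {len(outliers)}")
```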

  
  
NOTES:
Below are some of the possible problems that could lead to improper background subtraction:

0. If stowed background spectra were NOT EXTRACTED for some ObsIDs, then the photometry will be wrong.  Both the verify_extractions tool and the MERGE stage should report missing background spectra in the log file.

1. If the unmasked emap (obs.emap) and masked emap (diffuse.emap) are inconsistent, then the photometry will be wrong.  This problem should have been detected at the top of the section titled
  EXTRACT OBSERVED SPECTRA IN DIFFUSE REGIONS
where you ran code to verify consistency. 



2. The diffuse and stowed data have not been cleaned in the same way (e.g. the stowed data was scaled to match validation.evt but you have mistakenly extracted data derived from spectral.evt, different bad pixel tables were used for the two datasets, etc.).

3. There may be other reasons we haven't understood yet.  For example I see a 7.5 sigma excess in the 9:12 keV band for the "PWN" region in T-ReX, when all the other diffuse regions show insignificant deviations from zero.


NOTES:
The normalization of the stowed backgrounds was carefully adjusted in the L1->L2 processing for each ObsID (Hickox & Markevitch 2006, ApJ, 645, 95, Section 4.1.3).  



=====================================================================
PLOT THE EXTRACTED AND STOWED BACKGROUND SPECTRA
We plot the spectra up to 12 keV, to judge the scaling of the stowed background. 

  touch .marker
  idl
    .run
    acis_extract, 'diffuse_all.srclist', /FIT_SPECTRA, /GROUP_WITHOUT_BACKGROUND,$
    SNR_RANGE=[2,2], CHANNEL_RANGE=[35,822],$  ; Spectral range is 0.5:12 keV.
    MODEL_FILENAME='xspec_scripts/plot_gross_and_background_spectra.xcm' 

    plot_name = file_search('*/spectral_models/*plot_gross_and_background_spectra/ldata.ps', COUNT=num_plots)

    cmd = file_which('plot_spectra.pl')+' --doc_name=gross_and_background_spectra ' + strjoin(plot_name, ' ')
    run_command, cmd 
    exit, /NO_CONFIRM
    end
  foreach file (`find * -maxdepth 1 -newer .marker -name "*grp*.pi"`)
    'rm' -v $file
  end

{The foreach script above removes the 0.5:12 keV spectra used to make gross_and_background_spectra.ps, so they will not confuse us later.}    
    
The spectra shown are the observed extracted spectrum (black) and its scaled stowed background (red).  Above 9 keV they should have the same count rate---this may be hard to judge because the stowed spectrum resolves several lines and the observed spectrum rarely does so.


{In the AEAur spectrum I see that channels 35 and below are notably higher than channels 36 and above, so let's exclude 35 as we did in hPer.}



=====================================================================
(optional) EXPLORATORY FITTING

If you want to experiment with fitting by hand a few sources in XSPEC, you should list their names in a file (temp.srclist in the example below) and then use AE to prepare spectra that are grouped over the energy range you desire.  For example, 

  idl
    .r
    acis_extract, 'diffuse_all.srclist', /FIT_SPECTRA, MODEL_FILENAME='xspec_scripts/nofit.xcm',$
    SNR_RANGE=[1,3], NUM_GROUPS_RANGE=[32,250], CHANNEL_RANGE=[35,480]  ; Spectral range is 0.5:7 keV.
    exit, /NO_CONFIRM
    end

  ls -1 */*grp*.pi

However, keep in mind that we cannot collate results from hand fitting done in this way, because you cannot save them in the standardized form that AE expects.




=====================================================================
(optional) REVIEW SINGLE-ObsID EXTRACTION RESULTS

In each ObsID's screen session, execute the background /PLOT stage.

Each plot is automatically saved as a Postscript file; the filename is shown in the IDL plot window.  You are free to rescale them (or switch from linear to log on either or both axes) to make them more informative, and then re-save (File->Print).  You may want to record information about them in photometry_notes.txt.  

The number of points in all these plots should be the number of sources in the catalog; this allows you to middle-click on a point to recover the index of that source, in case you need to investigate outliers.  When no data are available for a source (e.g. when the source was not extracted in the ObsID you're plotting), its point appears at a "null" location, typically (0,0). 

  idl  
    .run
    obsdir = "../obs"+getenv("OBS")+"/" 
    collated_filename = obsdir + "plot.collated"
    acis_extract, "diffuse_all.srclist", getenv("OBS"), /SINGLE_OBS, VERBOSE=0 , COLLATED_FILENAME=collated_filename

    acis_extract, 'diffuse_all.srclist', getenv("OBS"), /EXTRACT_BACKGROUNDS, /PLOT, OUTPUT_DIR=obsdir, COLLATED_FILENAME=collated_filename
    end

    exit, /NO_CONFIRM
    
On a Mac, use Mission Control (formerly Exposé) to see all the plots at the same time; press the space bar to magnify the plot under the mouse!


  
=====================================================================
(optional) REVIEW MULTI-ObsID EXTRACTION RESULTS

Run this review step for every target, even though a few plots will be uninformative when you only have a single ObsID.

  idl
    acis_extract, '', /MERGE_OBSERVATIONS, /PLOT, COLLATED_FILENAME='tables/photometry.collated'
    
    acis_extract,                          /SHOW, COLLATED_FILENAME='tables/photometry.collated'
    
Two styles of plots are produced.  Some show the distribution of a single source property, and others are scatter plots for two source properties that may reveal useful insights into the data. 



=====================================================================
FIT SPECTRAL MODELS TO EACH SOURCE 

We currently have two similar diffuse fitting scripts:
  diffuse_3apec.xcm
  diffuse_3pshock.xcm
Soft emission from the massive star-forming region is modeled with three apec or pshock components.
Emission from stars is modeled by two constrained apec components.
Hard emission from the Galactic Ridge is modeled by an apec component.
Emission from unresolved AGN is modeled by a powerlaw component.


---------------------------------------------------------------------
Configure Fitting Scripts

Revise the plausible ranges for the model parameters in whatever XSPEC scripts you will be using so that they are appropriate for your astrophysical region.  The *_min and *_max variables in the fitting scripts are used as "soft parameter limits" in XSPEC, and are used to identify "limit violations" in our table generator.  NH in these fitting scripts is in units of 10^22 cm^-2.  

Since the instrumental background includes an emission line above 7 keV, we usually modify the point source fitting process in two ways:

  1. In the FIT_SPECTRA call to AE, we specify a fitting band of 0.5:7.0 keV (channels 35:480).

  2. In the diffuse fitting scripts the variables EbandLo, EbandHi, and BandName are modified to specify that full-band and hard-band fluxes are calculated on bands that stop at 7 keV instead of 8 keV.


---------------------------------------------------------------------
Perform Fitting Runs

An EXAMPLE AE fitting run is shown below.

* The "changes" script base.xcm allows us to assign appropriate model parameter ranges, initial values, etc. for our astrophysical region (instead of editing diffuse_3pshock.xcm itself).

* Choose a minimum number of spectral groups (the lower bound of NUM_GROUPS_RANGE; 32 in the example below) that is at least 3 plus the number of free parameters in your model.  Recall that two of the spectral groups (the first and last) will be 'ignored' in the fit.

* Feel free to perform additional fitting runs (on your entire list of regions, or on subsets) with a variety of SNR_RANGE and/or NUM_GROUPS_RANGE choices.  There is little authoritative advice we can give about the optimal number of groups for any particular model or spectrum.
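The group-count rule above (at least 3 plus the number of free parameters) is simple arithmetic; a Python sketch, where the 9-parameter model is purely hypothetical:

```python
# Rule of thumb from this procedure: the minimum number of spectral groups
# should be at least 3 plus the number of free model parameters, since the
# first and last groups are 'ignored' in the fit.
def min_groups(num_free_params):
    return num_free_params + 3

# Hypothetical model with 9 free parameters:
print(min_groups(9))
```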


  nice idl -queue |& tee -a fit_diffuse_3pshock_snr4.log  
    .run
    acis_extract, 'diffuse_all.srclist', /FIT_SPECTRA, FIT_TIMEOUT=3600,$
    SNR_RANGE=[1,4], NUM_GROUPS_RANGE=[32,250], CHANNEL_RANGE=[35,480],$  ; Spectral range is 0.5:7 keV.
    MODEL_FILENAME='diffuse_3pshock.xcm', MODEL_CHANGES_FILENAME='base.xcm'

    exit, /NO_CONFIRM
    end

Examine the AE messages found in the log files to find failures.
    
  egrep -i "WARNING|ERROR|DEBUG|halted|Stop|Source:" fit*.log   |egrep -v "DISPLAY variable|LS_COLORS|arithmetic error"
  
  

Identify any runs where XSPEC timed out, so they can be refit with parameter error estimation disabled:
  
  awk -F "[ /]+" '/process killed after/ {print $6}' fit_tbabs_2vapec+bkg.log > noerr.srclist
  wc -l noerr.srclist


---------------------------------------------------------------------
SPECIAL CASES

The default grouping specifications may not be ideal for every spectrum.

When the first or last group is very wide, reducing the energy range used for fitting can lead to a better grouped spectrum (because we ignore channels with poor SNR).

In some cases (e.g. AEAur and hPer) we find that channels 35 and below are notably higher than channels 36 and above.  This step-change must be non-physical; its origin is unknown.  In such cases, starting the energy range at 36 (instead of 35) may be sensible.
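For reference, the channel numbers quoted throughout this procedure map to energies via the ACIS PI channel width of 14.6 eV (energy ~ channel x 14.6 eV); a quick conversion sketch in Python (outside the AE workflow):

```python
# ACIS PI channels are 14.6 eV wide, so the CHANNEL_RANGE values quoted in
# the FIT_SPECTRA calls correspond roughly to energy = channel * 14.6 eV.
PI_CHANNEL_EV = 14.6

def channel_to_kev(channel):
    return channel * PI_CHANNEL_EV / 1000.0

for ch in (35, 36, 480, 822):
    print(f"channel {ch:3d} ~ {channel_to_kev(ch):6.3f} keV")
```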




---------------------------------------------------------------------
Hand Fitting

Two strategies are available for hand fitting, when required.  One strategy is to use the /INTERACT option to the FIT_SPECTRA call.  The fitting script will pause and give the user the XSPEC prompt.  The observer can hand-fit the spectrum, using most of the usual XSPEC commands (e.g. freeze, thaw, newpar, fit, error).  The user MUST NOT change the form of the model.  When the user returns control to the script, it will proceed with calculating uncertainties and fluxes.


An alternate strategy is for the observer to start an XSPEC session manually in an extraction directory, load one of the models that AE saved (spectral_models/grpXXXX/model.xcm), hand-fit the spectrum (WITHOUT changing the form of the model), and save the model (model only, not commands to load the spectrum) to a file, e.g. ./handfit.xcm.

Ingesting those hand-fits into AE involves the following steps.

0. Make a local COPY of the AE fitting script, so you can modify it, e.g.
     cp xspec_scripts/diffuse/diffuse_3pshock.xcm  .

1. Edit the 'existing_model' and 'existing_model_savfl' variables in the fitting script (e.g. diffuse_3pshock.xcm) so that the proper hand-fit xcm file will be loaded.

2. For each grouping SNR level used for hand-fitting, make a list of the regions that used that grouping.

3. Perform one AE fitting run that loads the hand-fits and generates products needed in spectra_viewer but skips error estimation.  This will produce AE products that reflect the observer's hand-fit (by omitting error estimation, this step will not move the fit to a new minimum in parameter space).

4. Perform a second AE fitting run that loads the hand-fits and performs error estimation.  Those fits can then be compared to the hand-fits in the spectra_viewer review tool.


Below is an example of IDL code that could perform these two fitting runs, with some sources using SNR=2 grouping (all regions except those matching "dark" or "global" in the egrep below) and some using SNR=3 grouping (the "dark" and "global" regions).  The energy range of the fit (expressed here in channels) is 0.5--7 keV, appropriate for diffuse emission.

Working in /data/extract/diffuse_sources.noindex,

  idl |& tee -a fit_handfit.log  
    .run ae
    run_command, 'egrep -v "dark|global" diffuse_all.srclist > snr2.srclist'
    run_command, 'egrep    "dark|global" diffuse_all.srclist > snr3.srclist'

    .run
    ; SNR=2 regions
    acis_extract, 'snr2.srclist', /FIT_SPECTRA, FIT_TIMEOUT=3600,$
    SNR_RANGE=[1,2],NUM_GROUPS_RANGE=[32,250], CHANNEL_RANGE=[35,480],$
    MODEL_FILENAME='diffuse_3pshock.xcm', MODEL_CHANGES_FILENAME='xspec_scripts/noerr.xcm'  ; Spectral range is 0.5:7 keV.

    acis_extract, 'snr2.srclist', /FIT_SPECTRA, FIT_TIMEOUT=3600,$
    SNR_RANGE=[1,2],NUM_GROUPS_RANGE=[32,250],CHANNEL_RANGE=[35,480],$  ; Spectral range is 0.5:7 keV.
    MODEL_FILENAME='diffuse_3pshock.xcm'

    ; SNR=3 regions
    acis_extract, 'snr3.srclist', /FIT_SPECTRA, FIT_TIMEOUT=3600,$
    SNR_RANGE=[1,3],NUM_GROUPS_RANGE=[32,250],CHANNEL_RANGE=[35,480],$  ; Spectral range is 0.5:7 keV.
    MODEL_FILENAME='diffuse_3pshock.xcm', MODEL_CHANGES_FILENAME='xspec_scripts/noerr.xcm'

    acis_extract, 'snr3.srclist', /FIT_SPECTRA, FIT_TIMEOUT=3600,$
    SNR_RANGE=[1,3],NUM_GROUPS_RANGE=[32,250],CHANNEL_RANGE=[35,480],$  ; Spectral range is 0.5:7 keV.
    MODEL_FILENAME='diffuse_3pshock.xcm'
    exit, /NO_CONFIRM
    end

  egrep -i "WARNING|ERROR|DEBUG|halted|Stop" fit_handfit.log   |egrep -v "DISPLAY variable|LS_COLORS|arithmetic error|unignored|skip_errors|variable_name|status 152"


Examine the XSPEC log files to verify that the desired hand-fit .xcm files were loaded into XSPEC, and discover which fitting runs encountered a 'new best fit' during the error estimation, thus moving to a new point in parameter space.

  foreach src (`egrep -v '^[[:space:]]*(;|$)' diffuse_all.srclist | cut -d " " -f1`)  
    printf "\n---------------\n${src}\n"
    foreach log (${src}/spectral_models/*_diffuse_*/xspec_run.log)
      set model=`dirname $log`
      printf "  %35s:  %s\n" "`basename $model`" "`egrep -ih 'LOADING' $log`"
      printf "  %s\n"                            "`egrep -ih 'new best' $log`"
    end
  end


=====================================================================
VISUALLY REVIEW THE MODELS 
=====================================================================

We must declare which model is preferred for each source, using the ae_spectra_viewer tool.

That tool lets you browse through the sources, examining the spectral models for each source. When you select a model in the list, your decision is recorded in the keyword BEST_MDL in that source's source.spectra file. Regardless of whether the fits were performed with the C-statistic (on un-grouped data) or with the Chi^2 statistic (on grouped data), you are shown two plots: the cumulative un-grouped model and data, plus the grouped model and data.

DURING THIS REVIEW, BE SURE ANY NOTES YOU TAKE REFERENCE SOURCES BY THEIR LABELS rather than by the "sequence number" shown in the tool for navigational convenience. Those sequence numbers merely reflect the position of the source in the set being reviewed, not any identifying property of the sources!

The tool provides a "provisional" button which sets a keyword in that source's source.spectra file.  We use this feature to mark sources that require additional fitting (next section), beyond the standard set of models run above.

In the example below we keep a log of the IDL session as a backup record of the observer's choices.

  idl -queue |& tee -a spec_review.log
    .r ae
    .run
    acis_extract, 'diffuse_all.srclist', COLLATED_FILENAME='tables/photometry.collated', MATCH_EXISTING=0, HDUNAME='BEST_MDL'

    ; List of source properties (FITS keywords) you want displayed.
    keylist = ['NH1','KT1','TAU1', 'NH2','KT2','TAU2', 'NH3','KT3','TAU3', 'F1H7','L1CH7', 'F2H7','L2CH7', 'F3H7','L3CH7', 'L4_5CH7', 'PH7', 'CHI_SQR','HANDFIT']
    
    ; List of categories you want to assign to sources.
    category_list ='' 
    
    distance=0.0
    read, 'distance in pc: ', distance
    ae_spectra_viewer, 'tables/photometry.collated', KEYLIST=keylist, CATEGORY_LIST=category_list, DISPLAY_FILE='../target.evt', REGION_FILENAME='tesselates.reg', DISTANCE=distance
    end
    
    
    


=====================================================================
BUILD A FINAL COLLATION OF THE AE DATA PRODUCTS
=====================================================================
When you're finished with spectral fitting, rebuild the "photometry" collation for the full catalog so that it includes the "best model" you designated for each source:

Collate the declared best models; present the best-model plot for each source, laid out 12 to a page.

  idl
    .run
    acis_extract, 'diffuse_all.srclist', COLLATED_FILENAME='tables/photometry.collated', MATCH_EXISTING=0, HDUNAME='BEST_MDL'

    bt = mrdfits('tables/photometry.collated',1)
    bt = bt[ where(~strmatch(bt.MODEL, 'no_fit*')) ]

    plot_name = strtrim(bt.CATALOG_NAME,2) + '/' + strtrim(bt.MERGE_NAME,2) + '/spectral_models/' + strtrim(bt.MODEL,2) + '/ldata.ps'
    ind = where(~file_test(plot_name), count)
    if (count GT 0) then begin
      print, 'The following plots are missing:'
      forprint, plot_name[ind]
      plot_name[ind] = file_which('no_model.eps')
    endif

    cmd = file_which('plot_spectra.pl')+' ' + strjoin(plot_name, ' ')
    run_command, cmd 
    exit, /NO_CONFIRM
    end
   
The code above calls a Perl script, plot_spectra.pl, that builds a LaTeX document (spectra.tex) presenting the XSPEC plots 12 to a page.  LaTeX is then run, producing the PostScript document spectra.ps.




=====================================================================
COMPUTE LUMINOSITIES, EXAMINE DISTRIBUTIONS OF SPECTRAL PARAMETERS

At this point it is helpful to compute luminosities from XSPEC fluxes, and to display distributions of spectral parameters.  This is easily done by the tool ae_flatten_collation.  Examination of the resulting plots and of the messages produced by the tool may suggest spectral models that are suspicious and should be examined more closely.

The ae_flatten_collation tool will also identify fit parameters that violate the stated parameter ranges, omitting those from the FITS and ASCII tables it produces. In many such cases you may decide to refit with the offending parameter frozen at its limit.

The tool tries to report various problems in the spectral models, for example error flags that XSPEC reported.  You may iterate a few times re-fitting problem sources, collating, and running ae_flatten_collation.

In the DISTANCE parameter below, supply the target distance (in pc) that you want to use for luminosities.

Remove from ABUNDANCES_TO_REPORT any elements you wish to omit from the LaTeX table.  Use ABUNDANCES_TO_REPORT='' to omit all.

  pushd tables
  idl -queue |& tee flatten.log
    .run ae
    .run
    distance=0.0
    read, 'distance in pc: ', distance
    ae_flatten_collation, COLLATED_FILENAME='photometry.collated', FLAT_TABLE_FILENAME='xray_properties.fits', /DIFFUSE, DISTANCE=distance, SORT=0, /FAST, ABUNDANCES_TO_REPORT=['O','Ne','Mg','Si','S','Fe']
    end

    exit, /NO_CONFIRM
  
  egrep -i "WARNING|ERROR|DEBUG|halted|Stop" flatten.log | egrep -v "arithmetic error|Large number of counts|Maximum number of iterations|source and/or background counts are 0|tclout|s: error|dataset_1d"

*** BE SURE TO REVIEW flatten.log CAREFULLY. ***
Look especially for "s" flags in the tables of parameter 'anomalies', which indicate that the error estimation was aborted for some reason, or not attempted at all.


This tool will produce both FITS and ASCII tables.  It may be convenient for Co-Is to format the ASCII table as a PDF.  I cannot get Apple's TextEdit to do this, but TextWrangler can:
  * Use File:Page Setup to define a "wide" page (perhaps 40" x 12").
  * Use File:Print to bring up print dialog box, where you can "Save as PDF".


Keep in mind that for diffuse sources AE scales the calibration (ARF) so that the spectral model derived by XSPEC is on a "per arcsec^2" basis. See Section 5.12 in the AE manual.  All ``flux'' quantities computed by XSPEC should then be understood to be surface brightness quantities with units of (erg /s /cm**2 /arcsec**2).  The ae_flatten_collation tool uses the distance to the target to produce "FitLuminosity" quantities in units of log(erg /s /pc**2).
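To make these unit conventions concrete, here is a minimal Python sketch of the conversion from an XSPEC surface-brightness "flux" (erg /s /cm**2 /arcsec**2) to a surface luminosity density (erg /s /pc**2).  This is illustrative only, not AE's code, and the function name is invented.  Note that the distance enters both the luminosity (d^2) and the physical area subtended by one arcsec^2 (also d^2), so it cancels algebraically:

```python
import math

PC_IN_CM   = 3.0856775814913673e18       # centimeters per parsec
ARCSEC_RAD = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def surface_luminosity(sb_flux, distance_pc):
    """Convert a surface brightness (erg /s /cm**2 /arcsec**2) into a
    surface luminosity density (erg /s /pc**2) for a target at distance_pc.
    Hypothetical helper for illustration -- not AE's implementation."""
    d_cm = distance_pc * PC_IN_CM
    # Luminosity emitted by the material seen within 1 arcsec^2 of sky.
    luminosity_per_arcsec2 = 4.0 * math.pi * d_cm**2 * sb_flux
    # Physical area (pc^2) corresponding to 1 arcsec^2 at this distance.
    pc2_per_arcsec2 = (distance_pc * ARCSEC_RAD)**2
    return luminosity_per_arcsec2 / pc2_per_arcsec2
```

For example, a surface brightness of 1e-15 erg /s /cm**2 /arcsec**2 corresponds to log(erg /s /pc**2) of about 33.7, independent of the assumed distance.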


The ae_flatten_collation tool produces histograms of various luminosities.  If you'd like to see the distribution of some combination of luminosities, you'll have to do the work yourself, for example:

  idl
    flat_table_fn   = 'xray_properties.fits'
    bt = mrdfits(flat_table_fn, 1)
  
    dataset_1d, id1, alog10(10.^(bt.FitLuminosity_1t)  + 10.^(bt.FitLuminosity_2t)  + 10.^bt.FitLuminosity_3t ), DATASET='SB1+2+3H7', BINSIZE=0.2, XTIT='log(erg /s /pc**2)'
     
    dataset_1d, id1, alog10(10.^(bt.FitLuminosity_1tc) + 10.^(bt.FitLuminosity_2tc) + 10.^bt.FitLuminosity_3tc), DATASET='SB1+2+3CH7', BINSIZE=0.2


    
=====================================================================
BUILD LaTeX TABLES FOR PUBLICATION

Since the set of elemental abundances that are fit varies among models, you will need to copy the LaTeX table template to tables/ and edit the local copy to hide abundance columns not used.  At the end of flatten.log you will find a message that gives the location of the LaTeX table template to copy, and the list of elemental abundances that should be reported.

Hide an abundance column by changing its LaTeX column type to "h".
Hide the corresponding column header by changing the \colhead{} command to \nocolhead{}.
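As a concrete example, hiding a hypothetical Fe abundance column in an AASTeX deluxetable looks like this (illustrative fragment only; your template's column list and headers will differ):

```latex
% Before: the Fe abundance column is typeset.
\begin{deluxetable}{lcc}
\tablehead{\colhead{Source} & \colhead{$kT$} & \colhead{Fe}}
\startdata
...
\enddata
\end{deluxetable}

% After: column type "c" -> "h" hides the data column,
% and \colhead{} -> \nocolhead{} hides its header.
\begin{deluxetable}{lch}
\tablehead{\colhead{Source} & \colhead{$kT$} & \nocolhead{Fe}}
\startdata
...
\enddata
\end{deluxetable}
```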

  idl -queue |& tee hmsfr.log
    .r ae
    .run
    ; Supply local path to table templates.
    hmsfr_tables, 'xray_properties.fits', file_which('/hmsfr_tables.tex'), /DIFFUSE, THERMAL_MODEL_PATTERN='diffuse'
    end
    exit, /NO_CONFIRM
    
  egrep -i "WARNING|ERROR|DEBUG|halted|Stop" hmsfr.log



If desired, remove abundance entries that are frozen to 1.0.
  'mv' diffuse_spectroscopy_style1.tex temp
  sed -e 's:\$1.0\\\!\*\\phd\$:          :g' temp >  diffuse_spectroscopy_style1.tex

  
To view the tables, make a PDF file, then open it:

  lualatex xray_properties
  lualatex -jobname=xray_properties "\includeonly{./src_properties,./diffuse_spectroscopy_style1,./diffuse_spectroscopy_style2}\input{xray_properties}"  
  open xray_properties.pdf


In Apple's Preview application, you can rotate all the "landscaped" pages by selecting them in the sidebar and then pressing the "Apple" and "r" keys.

  
  
  
=====================================================================
CREATE MAPS FOR INTERESTING PARAMETERS OF THE SPECTRAL MODELS.
    
Create maps of interesting quantities.  Adjust the code below as appropriate for the model you have fit.

The file pixels_in_tesselate.sav in the main diffuse directory (created earlier in the section CALCULATE THE GEOMETRIC AREAS OF THE DIFFUSE REGIONS) stores the indices for image pixels enclosed by each tesselate.

  mkdir maps
  cd    maps

  idl -queue |& tee -a build_maps.log
    restore, /V, '../pixels_in_tesselate.sav'
    .r ae
    ; Specify the location of the collation holding fit parameters we want to map.
    flat_table_fn   = '../tables/xray_properties.fits'
         
    bt = mrdfits(flat_table_fn, 1, theader)
    num_sources = n_elements(bt)
                
    NAME  = strtrim(bt.NAME,2)
    LABEL = strtrim(bt.LABEL       ,2)


    .run
    if (total(/INT, NAME NE sourcename) GT 0) then message, 'ERROR: sources in collation do not correspond to sources in pixels_in_tesselate.sav'
    
    ; ---------------------------------------------------------------
    ; Build maps of surface brightness (sb) and fit statistics.
    chi_map           = empty_map
    sb_soft_observed_map = empty_map
    sb_observed_map = empty_map
    sb1_observed_map = empty_map
    sb2_observed_map = empty_map
    sb3_observed_map = empty_map
    sb4_observed_map = empty_map
    sb5_observed_map = empty_map
    sb0_corrected_map         = empty_map
    sb1_corrected_map         = empty_map
    sb2_corrected_map         = empty_map
    sb3_corrected_map         = empty_map
    sb4_corrected_map         = empty_map
    sb5_corrected_map         = empty_map
    kt_avg12_map      = empty_map
    for ii=0L,num_sources-1 do begin
      ind = *pixels_in_tesselate[ii]
      
      chi_map              [ind] = (bt.ReducedChiSq)     [ii]
      sb_soft_observed_map [ind] = (bt.FitLuminosity_s)  [ii]
      sb_observed_map      [ind] = (bt.FitLuminosity_t)  [ii]
      sb1_observed_map     [ind] = (bt.FitLuminosity_1t) [ii]
      sb2_observed_map     [ind] = (bt.FitLuminosity_2t) [ii]
      sb3_observed_map     [ind] = (bt.FitLuminosity_3t) [ii]
      sb4_observed_map     [ind] = (bt.FitLuminosity_4t) [ii]
      sb5_observed_map     [ind] = (bt.FitLuminosity_5t) [ii]
      sb1_corrected_map    [ind] = (bt.FitLuminosity_1tc)[ii]
      sb2_corrected_map    [ind] = (bt.FitLuminosity_2tc)[ii]
      sb3_corrected_map    [ind] = (bt.FitLuminosity_3tc)[ii]
      sb4_corrected_map    [ind] = (bt.FitLuminosity_4tc)[ii]
      sb5_corrected_map    [ind] = (bt.FitLuminosity_5tc)[ii]
    ; kt_avg12_map     [ind] = ((bt.NORM1*bt.KT1 + bt.NORM2*bt.KT2) / (bt.NORM1 + bt.NORM2))[ii]
    endfor

    ; Add some FITS keywords to map headers and write the maps.
    fxaddpar, header, 'HDUNAME', 'reduced chi^2 map'
    writefits,   'chi_map.img',   chi_map, header

    fxaddpar, header, 'BUNIT', 'log(erg /s /pc**2)'
    
    fxaddpar, header, 'HDUNAME', 'SBH2 map'
    writefits,   'SBH2_map.img',   sb_soft_observed_map, header

    fxaddpar, header, 'HDUNAME', 'SBH7 map'
    writefits,   'SBH7_map.img',   sb_observed_map, header

    fxaddpar, header, 'HDUNAME', 'SB1H7 map'
    writefits,   'SB1H7_map.img',   sb1_observed_map, header

    fxaddpar, header, 'HDUNAME', 'SB2H7 map'
    writefits,   'SB2H7_map.img',   sb2_observed_map, header

    fxaddpar, header, 'HDUNAME', 'SB3H7 map'
    writefits,   'SB3H7_map.img',   sb3_observed_map, header

    fxaddpar, header, 'HDUNAME', 'SB1+2+3H7 map'
    writefits, 'SB1+2+3H7_map.img', alog10(10.^sb1_observed_map+10.^sb2_observed_map+10.^sb3_observed_map), header

    fxaddpar, header, 'HDUNAME', 'SB4H7 map'
    writefits,   'SB4H7_map.img',   sb4_observed_map, header

    fxaddpar, header, 'HDUNAME', 'SB5H7 map'
    writefits,   'SB5H7_map.img',   sb5_observed_map, header

    
;   fxaddpar, header, 'HDUNAME',         'SB0CH7 map'
;   writefits, 'SB0CH7_map.img', sb0_corrected_map, header
    
    fxaddpar, header, 'HDUNAME',         'SB1CH7 map'
    writefits, 'SB1CH7_map.img', sb1_corrected_map, header
    
    fxaddpar, header, 'HDUNAME',         'SB2CH7 map'
    writefits, 'SB2CH7_map.img', sb2_corrected_map, header

    fxaddpar, header, 'HDUNAME',         'SB1+2+3CH7 map'
    writefits, 'SB1+2+3CH7_map.img', alog10(10.^sb1_corrected_map+10.^sb2_corrected_map+10.^sb3_corrected_map), header

;   fxaddpar, header, 'HDUNAME',         'KT_avg12_map'
;   writefits, 'KT_avg12_map.img', kt_avg12_map, header

    fxaddpar, header, 'HDUNAME',         'SB3CH7 map'
    writefits, 'SB3CH7_map.img', sb3_corrected_map, header
    
    fxaddpar, header, 'HDUNAME',         'SB4CH7 map'
    writefits, 'SB4CH7_map.img', sb4_corrected_map, header

    fxaddpar, header, 'HDUNAME',         'SB5CH7 map'
    writefits, 'SB5CH7_map.img', sb5_corrected_map, header

    end
     
    .run
    ; ---------------------------------------------------------------
    ; Loop over each fit parameter, making both a grayscale map showing the best-fit values
    ; and a color-code map that tries to depict both best-fit and confidence interval information.
    ; ---------------------------------------------------------------
    param_names = ['NH1','NH2','NH3','KT1','KT2','KT3','TAU1','TAU2','TAU3','O','Neon','MG','S','SI','FE', 'NH4','NH5','KT4','KT5']
    for jj=0,n_elements(param_names)-1 do begin

      if ~tag_exist(bt, param_names[jj], index=colnum) then begin
        print, 'WARNING!  Could not find parameter '+param_names[jj]
        continue
      endif

      cmd = ['param_value','upper_confidence_limit','lower_confidence_limit','param_range'] + $
            ' = bt.' + param_names[jj] + $
            ['','_HiLim','_LoLim','_CI']
      for kk=0,3 do begin
        if ~execute(cmd[kk]) then message, 'Cannot execute: '+cmd[kk]   
      endfor
      
      param_was_frozen = strmatch(param_range, '*\**')

      tunit_kywd = string(1+colnum,F='(%"TUNIT%d")')
      tunit      = sxpar(theader, tunit_kywd)
      
      
      ;; =================================================
      ;; Encode best-fit and confidence interval information using color.
      ;; =================================================
      min_hue = 0
      max_hue = 260
      legend_xsize = (size(empty_map,/DIM))[0] / 3
      legend_ysize = 60
      saturation_floor = 0.3
      value_floor      = 0.3
      ;----------------------------------------------------
      ; Strategy I for using color to depict confidence interval information
      ;
      ; We choose to present the best-fit values as a percentage offset from a typical value, e.g. the median value, 
      ; so that abundances of different elements can be more appropriately compared.
      reference_value = median(param_value)
      param_percentage_offset = 100 * (param_value - reference_value) / reference_value

      ; Encode these parameter value ratios using the "hue" axis of the HSV color model.
      ; We have to pick some scaling:
      ;   param_percentage_offset = -50% will be red    (hue=min_hue) and 
      ;   param_percentage_offset = +50% will be purple (hue=max_hue).
 
      hue = min_hue > ( min_hue + max_hue * (param_percentage_offset - (-50)) / (+50 - (-50)) ) < max_hue
      
      print, param_names[jj], reference_value, reference_value, min_hue, max_hue, F='(%"\nMapping the fractional offset from a typical value, (%s - %0.2f)/%0.2f, with HSV color model.\n  Hue=%d depicts a fractional offset of -50%%, Hue=%d depicts a fractional offset of +50%%")'
      
      
      ; Just as parameter values were normalized to be unitless quantities, we choose to present confidence interval information
      ; as the average percentage offset from the best-fit parameter value.
           
      upperlimit_percentage_offset = 100 * (upper_confidence_limit - param_value) / param_value  
      lowerlimit_percentage_offset = 100 * (lower_confidence_limit - param_value) / param_value  
      
      ; For display purposes, we average the upper and lower uncertainty sizes.
      legend_limit_percentage_offset = [-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90]
      limit_percentage_offset       =  replicate(!values.f_nan, num_sources)
      for ii=0L,num_sources-1 do $
        limit_percentage_offset[ii] = mean(/NAN, abs([upperlimit_percentage_offset[ii], lowerlimit_percentage_offset[ii]]))
      
      ; When neither the upper nor the lower interval is known, assume the uncertainty is large.
      ind = where(~finite(limit_percentage_offset), count)
      if (count GT 0) then limit_percentage_offset[ind] = 200
      
      
      ; In order to encode this limit_percentage_offset via the saturation or value axes of the HSV model, we must
      ; convert it to a metric that is 1.0 for "good" tiles (tight confidence intervals) and smaller for tiles with large ones.
      ; Let's arbitrarily scale so that 
      ;   when limit_percentage_offset= 0 (zero errors) we display a saturation/value of 1.0
      ;   when limit_percentage_offset=50 (50% errors) we display a saturation/value of 0.5
      
      uncertainty_metric        = 0 > (1.0 - (0.5/50)*abs(limit_percentage_offset)        )
      legend_uncertainty_metric = 0 > (1.0 - (0.5/50)*abs(legend_limit_percentage_offset) )

      print, F='(%"  ''Value'' coordinate encodes average percentage offset from the best-fit parameter value to the upper/lower limits:\n  %%offset  Value\n  ---------------")'
      forprint, legend_limit_percentage_offset, legend_uncertainty_metric > value_floor, F='(%"     %4d   %4.2f")'
      
      ; Encode the uncertainty_metric using either the "saturation" or "value" axes of the Hue-Saturation-Value color model.
      saturation = replicate(1.0, num_sources)
      value      = uncertainty_metric 
      
      ; Build a corresponding colorbar to serve as a legend.
      hue_ramp   =     min_hue + (max_hue-min_hue)*(findgen(legend_xsize)/(legend_xsize-1))
      value_ramp = value_floor + (1.0-value_floor)*(findgen(legend_ysize)/(legend_ysize-1))
      
      make_2d,             hue_ramp, intarr(legend_ysize), legend_h, junk
      make_2d, intarr(legend_xsize),           value_ramp, junk    , legend_v
      legend_s = replicate(1.0, legend_xsize, legend_ysize)
       
      
      
      
      ;----------------------------------------------------
      ; Build the grayscale and color-coded maps.
      param_map = empty_map
      hmap      = empty_map
      smap      = empty_map
      vmap      = empty_map
      for ii=0L,num_sources-1 do begin
        ind = *pixels_in_tesselate[ii]
        
        ; MARK TILES THAT ARE FROZEN by leaving some pixels set to NaN 
        if param_was_frozen[ii] then begin
          index_to_point, ind, xind, yind, size(empty_map)
          xind -= median(xind)
          yind -= median(yind)
          good  = where(xind NE -yind)
          ind = ind[good]
        endif
        
        param_map[ind] = param_value[ii]
        
        hmap     [ind] =        hue[ii]
        smap     [ind] = saturation[ii]
        vmap     [ind] =      value[ii]
      endfor ;ii

     ;print, 'Writing maps for ', param_names[jj]
      fxaddpar, header, 'HDUNAME', param_names[jj]+' map'
      fxaddpar, header, 'BUNIT', tunit
      
      writefits, param_names[jj]+'_map.img', param_map, header
      
      ; Convert to RGB color system, adopting a floor on saturation and value so that we retain some color in all tiles.
      color_convert, legend_h, saturation_floor > legend_s < 1.0, value_floor > legend_v < 1.0, legend_r, legend_g, legend_b, /HSV_RGB
      color_convert,     hmap, saturation_floor >     smap < 1.0, value_floor >     vmap < 1.0,     rmap,     gmap,     bmap, /HSV_RGB
      
      ; Set the NaN pixels to a specific color
      ind = where(~finite(hmap), count)
      if (count GT 0) then begin
        rmap[ind] = 255
        gmap[ind] = 255
        bmap[ind] = 255
      endif

      ; Paste in the color bar legend.
      x0 = (size(empty_map,/DIM))[0] - legend_xsize
      y0 = 0
      ; Override the generic corner position with target-specific coordinates; adjust for your field.
      x0 = 126
      y0 = 923
      rmap[x0,y0] = legend_r
      gmap[x0,y0] = legend_g
      bmap[x0,y0] = legend_b
      
      writefits, param_names[jj]+'_rmap.img', rmap, header
      writefits, param_names[jj]+'_gmap.img', gmap, header
      writefits, param_names[jj]+'_bmap.img', bmap, header
    endfor ;jj
    end
     
 
View the grayscale maps in ds9 with linear, minmax scaling:

  ds9 -scale mode minmax -linear  SB*_map.img &
  ds9 -scale mode minmax -linear  NH*_map.img &
  ds9 -scale mode minmax -linear  KT*_map.img &
  ds9 -scale mode minmax -linear  O_map.img Neon_map.img MG_map.img S_map.img SI_map.img FE_map.img &

 
You MUST view these 3-color maps in ds9 with linear, minmax scaling to produce the intended colors!!  You must NOT RESCALE these RGB images in ds9!

  foreach name (NH1 NH2 NH3 KT1 KT2 KT3 TAU1 TAU2 TAU3 O Neon MG S SI FE)

    ds9 -scale mode minmax -linear -rgb -rgb lock colorbar yes -rgb lock scale yes -red ${name}_rmap.img -green ${name}_gmap.img -blue ${name}_bmap.img &
  end
     
      

;      ;----------------------------------------------------
;      ; Strategy II for using color to depict confidence interval information
;      ;
;      ; We choose to normalize the best-fit values by a typical value, e.g. the "bisector" computed by 
;      ; validate_xspec_confidence_interval, so that abundances of different elements can be more appropriately compared.
;      ; Log scaling seems appropriate, and it's convenient to use log base 2 so that a ratio of 0.5 maps to -1
;      ; and a ratio of 2.0 maps to +1.  
;      param_percentage_offset = alog(param.param_value / param.bisector) / alog(2.0)
;
;      ; Encode these parameter value ratios using the "hue" axis of the HSV color model.
;      ; We have to pick some scaling:
;      ;   a ratio of 0.5 (param_percentage_offset==-1) will be red    (hue=0) and 
;      ;   a ratio of 2.0 (param_percentage_offset==+1) will be purple (hue=max_hue).
; 
;      hue = min_hue > ( min_hue + max_hue * (param_percentage_offset + 1.0) / 2.0 ) < max_hue
;      
;      
;      
;      ; Compute the significance of each tile's deviation from a canonical "bisector" value, based on where the bisector value lies with respect to the confidence interval.
;      ; Encode this deviation significance using either the "saturation" or "value" axes of the Hue-Saturation-Value color model.
;      ;
;      ; Tiles with typical parameter values (near the bisector) will be greenish, and almost always have low saturation or value since the bisector will be well within their confidence intervals.
;      ;----------------------------------------------------
;      
;      bisector_is_excluded = (param.lower_confidence_limit GE param.bisector) OR (param.upper_confidence_limit LE param.bisector)
;      
;      print, param_names[jj], F='(%"\nMapping %s ...")'
;      print, total(/INT, bisector_is_excluded), param.bisector, F='(%"These %d sources have confidence intervals that exclude the bisector value %0.2f ")'
;      
;      ; We have to pick some scaling for the confidence interval information---we choose to map the location of the bisector value within the parameter's confidence interval and to scale such that this value is 1.0 when the bisector is outside the interval.  
;      
;      deviation_significance = fltarr(num_sources)
;
;      ; Deal with cases where param_value is less than the bisector.
;      location = ((param.bisector - param.param_value) / (param.upper_confidence_limit - param.param_value))
;      
;      ind = where(location GT 0, count)
;      if (count GT 0) then deviation_significance[ind] = location[ind] 
;      
;      ; Deal with cases where param_value is greater than the bisector.
;      location = ((param.bisector - param.param_value) / (param.lower_confidence_limit - param.param_value))
;      
;      ind = where(location GT 0, count)
;      if (count GT 0) then deviation_significance[ind] = location[ind] 
;      
;      ; Encode the deviation_significance using either the "saturation" or "value" axes of the Hue-Saturation-Value color model.
;      saturation = replicate(1.0, num_sources)
;      value      = deviation_significance 



#################################################################################
############# RELEASING DATA PRODUCTS TO COLLABORATORS ##################
#################################################################################
See photometry_procedure.txt



#################### APPENDIX A: STOWED BACKGROUND DATA #################### 

The "stowed background" data in CALDB is discussed in these documents:

Hickox, R. C., & Markevitch, M. 2006, ApJ, 645, 95
http://cxc.harvard.edu/caldb/calibration/acis.html  (Blank Field Event Files section)
http://cxc.harvard.edu/cal/Links/Acis/acis/Cal_prods/bkgrnd/current/
http://cxc.harvard.edu/cal/Acis/Cal_prods/bkgrnd/acisbg/index.html
http://cxc.harvard.edu/cal/Acis/Cal_prods/bkgrnd/acisbg/data/README
 
The key recommendations for how to perform equivalent processing on the stowed and science data are found in Maxim's cookbook:
http://cxc.harvard.edu/contrib/maxim/acisbg/COOKBOOK


In this recipe we use the stowed data that was CTI corrected by the CXC (since we've now retired our corrector).


---------------------------------------------------------------------
FIND THE APPROPRIATE STOWED DATA (for EACH observation)

  # Build a list of the CALDB stowed data corresponding to the CCDs used in your observation:
  # See description of stowed data epochs "D" and "E" at http://cxc.harvard.edu/contrib/maxim/acisbg/data/README
  # Both epochs are combined for FI devices; one epoch is selected for BI devices using the observation date.
  # In March 2013 (CALDB 4.5.6 release) we decided epoch "F" can be combined with "D" and "E" for FI detectors.
  # In March 2013 (CALDB 4.5.6 release) we decided epoch "F" can be combined with "E" for S3.

  set ccdlist=` dmkeypar acis.astrometric.calibrated.subpix.evt1 DETNAM echo=yes   | sed -e 's/ACIS-//'`
  set date_obs=`dmkeypar acis.astrometric.calibrated.subpix.evt1 DATE-OBS echo=yes | cut -c1-4,6,7,9,10`
  set stowed_dir="$CALDB/data/chandra/acis/bkgrnd"
  
  printf "# ACIS-${ccdlist}\n"                                                           >  CALDB_stowed.txt
  printf "# DATE-OBS = $date_obs \n"                                                     >> CALDB_stowed.txt
  
  'ls' -1 $stowed_dir/acis[01234689]D200[059]*bgstow_cti* | egrep ".+acis[${ccdlist}].+" >> CALDB_stowed.txt
  
  if ($date_obs > 20050901) then
    'ls' -1 $stowed_dir/acis[57]D200[59]*bgstow_cti*     | egrep ".+acis[${ccdlist}].+"  >> CALDB_stowed.txt
  else
    'ls' -1 $stowed_dir/acis[57]D2000*bgstow_cti*        | egrep ".+acis[${ccdlist}].+"  >> CALDB_stowed.txt
  endif
  cat CALDB_stowed.txt


---------------------------------------------------------------------
MERGE THE STOWED DATA FILES (for EACH observation).  AS MUCH AS POSSIBLE, APPLY THE SAME CLEANING STEPS USED ON THE DIFFUSE DATA.
 
We make no attempt to account for the fact that the stowed background and our observation used different levels of background flare cleaning.

We likewise make no attempt to account for the differences between the bad pixel table used by Maxim and the table we are using on our data.


  punlearn dmmerge dmcopy
  
  dmmerge infile="@CALDB_stowed.txt" outfile=acis.stowed.calibrated.evt1 clobber=yes
  
  
IF THE DATAMODE of your observation is NOT VFAINT, then you must CLEAR the Clean55 bits in the stowed data so that the observation and stowed data sets will have equivalent filtering applied (status=0 filter below)!!!

  if (`dmkeypar acis.astrometric.calibrated.subpix.evt1 DATAMODE echo=yes` != 'VFAINT') then
       echo "CLEARING THE CLEAN55 BIT IN THE STOWED DATA"
       mv -f acis.stowed.calibrated.evt1 temp.evt
       dmtcalc  infile=temp.evt outfile=acis.stowed.calibrated.evt1 expression="status=status,status=X8F" clobber=yes
  endif

      
Find out what energy filter has been applied to your diffuse event data, and change the energy filter below to match!

  dmstat "diffuse.evt[cols energy]"

  dmcopy "acis.stowed.calibrated.evt1[grade=0,2,3,4,6,energy<12000,status=0]" stowed.evt


---------------------------------------------------------------------
REPROJECT THE STOWED DATA (for EACH observation)

We must add a fake TIME column to the stowed data with random times from the real observation so that reproject_events can compute appropriate sky coordinates for the stowed events.

Randomization of event positions (controlled by the parameter "random") should not be important for background data, which have no point sources.  We choose to turn it off anyway (random=-1 in call to reproject_events).

  cd obsXXXX/
  
  punlearn dmhedit dmtcalc reproject_events

  # Copy range of observation's first GTI table to TSTART and TSTOP in the stowed data.
  dmstat "diffuse.evt[3][cols START]"
  set tstart=`pget dmstat out_min`
  dmstat "diffuse.evt[3][cols STOP]"
  set  tstop=`pget dmstat out_max`
  
  dmhedit "stowed.evt[2]" filelist=none operation=add key=TSTART  value=$tstart
  dmhedit "stowed.evt[2]" filelist=none operation=add key=TSTOP   value=$tstop
  
  # Add to the stowed data a random TIME column with values on [TSTART:TSTOP].
  dmtcalc  stowed.evt  temp.evt  expression="TIME=TSTART+((TSTOP-TSTART)*(#rand))" clobber=yes
  
  # Reproject the stowed data to the tangent plane of the observation.
  reproject_events  infile=temp.evt  aspect=obs.asol  match=diffuse.evt  outfile=stowed.evt  random=-1 clobber=yes   
 

---------------------------------------------------------------------
MAKE "BACKGROUND EXPOSURE MAPS" FOR THE STOWED DATA (for EACH observation)

When extracting diffuse sources, we will want to model the instrumental background by extracting the "stowed" ACIS data falling within each source aperture.  Those "stowed" spectra will have to be appropriately scaled to match the "depth" of the ACIS observation within that aperture.  A background spectrum in AE is scaled by the ratio of two integrals---the integral of a "source" exposure map over the source aperture, and the integral of a "background" exposure map over the background aperture.  

Recall that a "source" exposure map conflates three separate effects contributing to the "depth" of the observation at each position on the sky:
  1. The effective area of the HRMA and ACIS is non-uniform across the ACIS detector.
  2. Since the pointing of Chandra is dithered, the amount of time a point on the sky fell onto the detector varies strongly with position.
  3. Since each CCD's telemetry stream is independent, each CCD can have a different total observing time.

Thus, we must construct a similar "background" exposure map that, when integrated over the same aperture, produces the appropriate scaling of the stowed spectrum.  Since effect #1 (effective area variation) is unrelated to the instrumental background, we would like it to appear in the background exposure map unaltered.  Since effect #2 (dither effects) is the same for both X-rays and instrumental background we would again like it to appear in the background exposure map unaltered.  In contrast, effect #3 (total observing time on each CCD) is clearly very different between the science observation and the calibration observations used to build the stowed dataset in CALDB.  Thus, the "background" exposure map that we desire should be constructed by appropriately scaling and then summing the single-CCD exposure maps for the observation.

Approximate scalings for the single-ObsID exposure maps could be computed from the single-CCD EXPOSUR? values found in the observation and  stowed event lists.  However, the ACIS instrumental background is known to be time-varying, and thus the proper scaling cannot be known only from observing times.  Hickox & Markevitch (2006, \apj, 645, 95, Section 4.1.3) have shown that good results can be obtained by scaling a stowed spectrum so that it matches the observed spectrum over the energy range 9-12 keV, where there are no X-rays.  Thus, the code below compares single-ObsID 9-12 keV spectra from the observation and stowed data to derive an adjustment for each single-CCD exposure map.
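The weighting scheme described above can be sketched as follows.  This is a schematic in Python, not AE's implementation; all names (background_emap, ccd_emaps, obs_counts, stowed_counts) are invented for illustration.  Each single-CCD exposure map is multiplied by that CCD's stowed-to-observed 9-12 keV count ratio, so that AE's background scaling (source-emap integral divided by background-emap integral over an aperture) reproduces the observed-to-stowed normalization of Hickox & Markevitch:

```python
import numpy as np

def background_emap(ccd_emaps, obs_counts, stowed_counts):
    """Sum single-CCD exposure maps, each weighted by the stowed-to-observed
    9-12 keV count ratio for that CCD.  ccd_emaps maps CCD id -> 2-D exposure
    map; obs_counts and stowed_counts map CCD id -> 9-12 keV counts.
    Schematic only -- not AE's code."""
    total = None
    for ccd, emap in ccd_emaps.items():
        scale = stowed_counts[ccd] / obs_counts[ccd]  # 9-12 keV count ratio
        total = scale * emap if total is None else total + scale * emap
    return total
```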


IT IS VITAL THAT YOU SCALE THE STOWED DATA WITH RESPECT TO WHATEVER OBSERVATION DATA YOU WILL BE USING FOR DIFFUSE ANALYSIS!!  

  1. The two event lists must cover the SAME field of view.  For example, you must not compare a masked observation event list to an unmasked stowed event list.
  
  2. The two event lists must be similarly cleaned.  For example, with VFAINT data, an event list that has not been cleaned by the acis_detect_afterglow and Clean55 algorithms (spectral.evt in our recipes) may have a 9-12 keV count rate 1.7 times that obtained when those algorithms are applied (validation.evt in our recipes).

In the code below, obsdata_filename should be the name of an UNMASKED event list that has cleaning similar to that applied to diffuse.evt.


  nice idl -queue |& tee -a ae_make_emap.log
    .run ae
    .run
    obsdata_filename = 'validation.evt'
    stowed_filename  = 'stowed.evt'
    
    bt_obs    = mrdfits(obsdata_filename, 1, obs_header)
    bt_stowed = mrdfits(stowed_filename , 1, stowed_header)
     
    ; Scaling of stowed data uses the energy range 9-12 keV (Hickox & Markevitch, 2006, \apj, 645, 95, Section 4.1.3).
    emax = max(bt_obs.energy) < max(bt_stowed.energy) < 12000
    emin = 9000
    
       obs_ccd_hist = histogram(MIN=0,MAX=9,BIN=1, (   bt_obs.ccd_id)[where((   bt_obs.energy GE emin) AND (   bt_obs.energy LE emax))])
    stowed_ccd_hist = histogram(MIN=0,MAX=9,BIN=1, (bt_stowed.ccd_id)[where((bt_stowed.energy GE emin) AND (bt_stowed.energy LE emax))])
    
    ccd_scaling = stowed_ccd_hist / float(obs_ccd_hist)
    
    ; Plot stowed and observed spectra in the scaling energy band.
    fullfield_scaling = mean(ccd_scaling[0:3])
    dataset_1d, id, (   bt_obs.energy)[where(   bt_obs.ccd_id LE 3)], DATASET='I-array observed', XTITLE='energy (eV)'
    dataset_1d, id, (bt_stowed.energy)[where(bt_stowed.ccd_id LE 3)], DATASET='I-array stowed (scaled)', NORM_ABSC=[0,15000], NORM_VAL=[fullfield_scaling,fullfield_scaling],COLOR='red',LINE=0
    
    for ii=0,9 do begin
      if ~finite(ccd_scaling[ii]) then continue
      
      ; Record the effective LIVETIMEs of the stowed data for each CCD.
      keyname = string(ii, F="(%'LIVTIME%d')")
      livtime = sxpar( obs_header, keyname )
      cmd = string(stowed_filename, keyname, ccd_scaling[ii] * livtime, F="(%'dmhedit %s filelist=none operation=add key=%s  value=%0.1f')")
      run_command, cmd
      
      ; Display a map of the mismatch significance between the stowed and observed data, for each CCD, to look for patterns.
      observ_image_spec = string(obsdata_filename, ii, emin, emax, F="(%'%s[ccd_id=%d,energy=%d:%d][bin chip=::128][opt type=r4]')")
      stowed_image_spec = string( stowed_filename, ii, emin, emax, F="(%'%s[ccd_id=%d,energy=%d:%d][bin chip=::128][opt type=r4]')")
      outfile           = string(ii, F="(%'stowed_error_significance_ccd%d.img')")
      
      cmd = string(observ_image_spec+","+stowed_image_spec, outfile, ccd_scaling[ii], (ccd_scaling[ii])^2, $
                   F="(%'dmimgcalc infile=""%s"" infile2=none outfile=%s operation=""imgout=(img1-(img2/%0.3f))/sqrt(img1+(img2/%0.3f))"" verbose=1 clob+')")          
      run_command, cmd
      run_command, "ds9 -linear "+outfile+" -zoom to fit  >& /dev/null &"
    endfor
    
    print, emin/1000., emax/1000., obsdata_filename, F="(%'Stowed data is scaled as shown below so the energy range %0.1f:%0.1f keV matches %s:')"
    forprint, indgen(10), ccd_scaling, F="(%'  CCD%d  %0.2f')"
    
    ; Use the existing observation's emap (obs.emap) as a scene template to build single-CCD emaps, scale those by the values in ccd_scaling, and then sum.

    ae_make_emap, obsdata_filename, ['trash'], CCD_LIST=['012367'], ARDLIB_FILENAME='ardlib.par', /REUSE_ASPHIST, MATCHFILE='obs.emap', ASPECT_FN='obs.asol', CCD_SCALING=ccd_scaling, SCALED_NAME='stowed.'
    end
    
  mv stowed.trash.emap stowed.emap
    
  ds9 -lock frame wcs -bin factor 8 -log diffuse.evt diffuse.emap stowed.evt stowed.emap -zoom to fit &
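The dmimgcalc expression used for the significance images is standard Poisson error propagation: with scaling s = stowed/observed, the variance of img1 - img2/s is approximately img1 + img2/s^2, which is why the loop passes both ccd_scaling[ii] and its square.  A sketch with hypothetical counts images:

```python
import numpy as np

# Hypothetical co-registered counts images for one CCD:
img1 = np.array([[100.0, 64.0]])   # observed counts
img2 = np.array([[ 60.0, 30.0]])   # stowed counts
s = 0.5                            # stowed/observed scaling for this CCD

# (observed - scaled stowed) in units of its Poisson sigma;
# Var(img1 - img2/s) ~ img1 + img2/s**2, hence s**2 in the denominator.
significance = (img1 - img2 / s) / np.sqrt(img1 + img2 / s**2)
```

Structure in these maps (beyond Poisson noise) indicates that a single scalar per CCD is not a perfect model of the instrumental background for that observation.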
  





#################### APPENDIX B: MODELING CELESTIAL BACKGROUNDS #################### 
---------------------------------------------------------------------
Devise a Model for the Celestial Background (IF YOU CHOOSE TO USE THIS STRATEGY)

In any diffuse analysis it is important to choose a suitable model for the background components not represented in the stowed data (e.g. emission from extragalactic sources, the local hot bubble, and solar wind charge exchange).  Some investigators try to physically model each of these components.  Instead, we generally define a "celestial background" region within our observation, fit an arbitrary model to that spectrum, and use that frozen celestial background model in subsequent fits of our diffuse sources.

If you have several candidate regions you're considering for the role of celestial background, then it is convenient to extract each of them separately, and then try to build a celestial background model in XSPEC that fits all of them simultaneously.  During that process, plotting the fit residuals should help you decide which of the candidate regions have spectra that are similar enough to be adopted as the celestial background region.  As you build this model, make sure that you are not using any XSPEC configurations that are different from those used in the AE fitting scripts, for example the setting of the "abund" command.

You can manually apply AE's grouping algorithm to the spectra you have extracted so that you can work with them in XSPEC:

  idl
    acis_extract, 'diffuse_all.srclist', /FIT_SPECTRA,$
    SNR_RANGE=[?,?], NUM_GROUPS_RANGE=[??,250], CHANNEL_RANGE=[35,480],$  ; Spectral range is 0.5:7 keV.
    MODEL_FILENAME='xspec_scripts/nofit.xcm'

The grouped spectrum produced above can be directly loaded into XSPEC and fit with whatever model you dream up.  We normally freeze all parameters of this celestial background model, then save the model out of XSPEC, e.g.

  save model background.xcm

Then edit the .xcm file you just created as follows:

  1. As documentation for how the model was derived, comment out (with the # character) the commands that configure physical models in XSPEC, including:
       xsect 
       abund 
       
  
  2. Remove all commands not required to define the model, including lines with these XSPEC commands:
       statistic
       method 
       cosmo 
       xset 
       systematic   
  
  3. Specify a suitable name for this celestial background model by adding "3:sky" to the "model" command, e.g.
  
       model 3:sky TBabs(apec + apec + apec) + powerlaw

Look in the AE fitting script(s) you will be using, just before the section where the source model is defined, for an optional section that shows how to load this celestial background model.  Remove the comment character from the three lines of code in that section, changing the name of your model .xcm file as required.



