Home > fvcom_prepro > get_NCEP_forcing.m

get_NCEP_forcing

PURPOSE ^

Get the required parameters from NCEP products to force FVCOM (through any of Casename_wnd.nc, Casename_sst.nc, Casename_hfx.nc or Casename_pre_evap.nc).

SYNOPSIS ^

function data = get_NCEP_forcing(Mobj, modelTime, varargin)

DESCRIPTION ^

 Get the required parameters from NCEP products to force FVCOM (through
 any of Casename_wnd.nc, Casename_sst.nc, Casename_hfx.nc or
 Casename_pre_evap.nc).

 data = get_NCEP_forcing(Mobj, modelTime)

 DESCRIPTION:
   Using OPeNDAP, extract the necessary parameters to create an FVCOM
   forcing file. Requires the air_sea toolbox and the OPeNDAP toolbox (see
   below for where to get them).

 INPUT:
   Mobj - MATLAB mesh object. Must contain fields:
       lon, lat    - array of longitude and latitudes.
       have_lonlat - boolean to signify whether coordinates are spherical
                   or cartesian.
   modelTime - Modified Julian Date start and end times
   varargin - parameter/value pairs
       - list of variables to extract:
           'varlist', {'nshf', 'uwnd', 'vwnd'}
       - data source:
           'source', 'reanalysis1'
           'source', 'reanalysis2' [default]
           'source', '20thC'
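
   Since modelTime is given in Modified Julian Days, the bounds are most
   easily built with the toolbox's greg2mjulian (which this function also
   uses internally); a sketch with arbitrary dates:

       tstart = greg2mjulian(1999, 6, 16, 0, 0, 0);
       tend   = greg2mjulian(1999, 7, 17, 0, 0, 0);
       data = get_NCEP_forcing(Mobj, [tstart, tend]);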

 OUTPUT:
   data - struct of the data necessary to force FVCOM. These can be
   interpolated onto an unstructured grid in Mobj using grid2fvcom.m.

 The parameters which can be obtained from the NCEP data are:
     - u wind component (uwnd)
     - v wind component (vwnd)
     - Downward longwave radiation surface (dlwrf)
     - Net shortwave radiation surface (nswrs = dswrf - uswrf)
     - Air temperature (air)
     - Relative humidity (rhum)
     - Precipitation rate (prate)
     - Sea level pressure (pres or press)
     - Latent heat flux (lhtfl)
     - Sensible heat flux (shtfl)
     - Potential evaporation rate (pevpr)
     - Topography (topo)
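
 Where a product lacks the net radiation fluxes (the reanalysis-2 and 20th
 Century data), they are reconstructed from the component fluxes as
 net = down - up, i.e.:

     nswrs = dswrf - uswrf;   % net shortwave at the surface
     nlwrs = dlwrf - ulwrf;   % net longwave at the surface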

 In addition to these, the momentum flux (tau) is calculated from wind
 data. Precipitation is converted from kg/m^2/s to m/s. Evaporation (Et)
 is calculated from the mean daily latent heat net flux (lhtfl) at the
 surface. Precipitation-evaporation is also created (P_E).
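
 As a sketch, the conversions described above amount to the following (the
 water density and latent heat of vaporisation are assumed typical values;
 the function may use slightly different constants):

     rho = 1025;      % water density (kg/m^3), assumed
     Lv  = 2.5e6;     % latent heat of vaporisation (J/kg), assumed
     Et  = lhtfl ./ (rho * Lv);   % evaporation rate (m/s) from latent heat flux (W/m^2)
     P   = prate ./ 1000;         % precipitation from kg/m^2/s to m/s
     P_E = P - Et;                % precipitation minus evaporation (m/s)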

 This output struct also includes a land mask extracted from the pevpr
 data. If the pevpr data is not requested, then no land mask is returned.

 EXAMPLE USAGE:
   To download the default set of data (see list above):

       forcing = get_NCEP_forcing(Mobj, [51345, 51376]);

   To only download wind data:

       forcing = get_NCEP_forcing(Mobj, [51345, 51376], 'varlist', {'uwnd', 'vwnd'});

   To use the 20th Century Reanalysis 2 data:

       forcing = get_NCEP_forcing(Mobj, [51345, 51376], 'source', '20thC');
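
   To pass the result on to the unstructured grid (the grid2fvcom call
   below is illustrative; check grid2fvcom.m for its actual signature):

       forcing = get_NCEP_forcing(Mobj, [51345, 51376], 'varlist', {'uwnd', 'vwnd'});
       interp = grid2fvcom(Mobj, {'uwnd', 'vwnd'}, forcing);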

 REQUIRES:
   The air_sea toolbox:
       http://woodshole.er.usgs.gov/operations/sea-mat/air_sea-html/index.html
   The OPeNDAP toolbox (MATLAB 2011b or older only):
       http://www.opendap.org/pub/contributed/source/ml-toolbox/

 Author(s)
   Pierre Cazenave (Plymouth Marine Laboratory)
   Ricardo Torres (Plymouth Marine Laboratory)
   Rory O'Hara Murray (Marine Scotland Science)

 Revision history:
   2012-10-31 First version based on get_NCEP_L4.m.
   2013-02-14 Add support for the native OPeNDAP functions in the MATLAB
   netcdf tools. Also fix the value of data.P_E.data to be only the
   evaporation. The evaporation available from NCEP in pevpr is in units
   of W m^{-2} whereas FVCOM expects m s^{-1}. Rather than convert from
   W m^{-2} to m s^{-1}, use the latent heat flux at the surface with the
   density of water and the latent heat of vaporisation to estimate the
   evaporation rate in m s^{-1}.
   2013-06-17 Remove the 'pevpr' variable from the data fetched from NCEP.
   The 'pevpr' data only covers land (and is therefore largely useless for
   the toolbox's needs). Also, we're not actually using 'pevpr' for the
   calculation of evaporation since we're estimating that from the latent
   heat net flux ('lhtfl'), so it's superfluous anyway.
   2013-06-28 Changed the way the MATLAB version is determined. Now using
   the release date rather than the version number: release 7.13 is newer
   than release 7.7, but numerically 7.13 is not greater than 7.7. (ROM)
   2013-07-01 Added the 'actual_range' attribute to the native matlab
   download section, as this is needed later when identifying the domain
   range and replacing values outside this with NaNs. (ROM)
   2013-07-18 Add support for selecting only a subset of the available
   variables from NCEP (PC).
   2013-08-02 Added the 'precision' attribute to the native matlab
   download section. Reduced the precision of the 'add_offset' and
   'scale_factor' attributes to that specified in the netCDF file. This is
   because errors can occur when rescaling the data with very high
   precision scale factors and offsets. (ROM)
   2013-08-07 Update the URL from which to download the data (actually use
   NCEP Reanalysis-2 now instead of the original NMC reanalysis). This has 
   necessitated a change in some field names (slp is now pres). The NCEP
   Reanalysis-2 data don't have net {long,short}wave radiation flux data,
   so this is calculated from the downward and upward fluxes. Also check
   the returned data from NCEP match the dimensions of the longitude and
   latitude data (particularly important for interpolation onto the
   unstructured grid with grid2fvcom). I haven't fully tested these
   changes with the third-party OPeNDAP toolbox, but I have in principle
   added the necessary support (PC).
   2013-08-08 Make the script a generic script to download either the
   original reanalysis ('reanalysis1'), the reanalysis-2 ('reanalysis2')
   or the 20th Century Reanalysis-2 ('20thC') data (PC).
   2014-02-03 Merge Rory's changes to my latest version (PC).
   2015-07-07 Add support for multi-year downloads (PC).

==========================================================================

CROSS-REFERENCE INFORMATION ^

This function calls:
This function is called by:

SOURCE CODE ^

0001 function data = get_NCEP_forcing(Mobj, modelTime, varargin)
0002 % Get the required parameters from NCEP products to force FVCOM (through
0003 % any of Casename_wnd.nc, Casename_sst.nc, Casename_hfx.nc or
0004 % Casename_pre_evap.nc).
0005 %
0006 % data = get_NCEP_forcing(Mobj, modelTime)
0007 %
0008 % DESCRIPTION:
0009 %   Using OPeNDAP, extract the necessary parameters to create an FVCOM
0010 %   forcing file. Requires the air_sea toolbox and the OPeNDAP toolbox (see
0011 %   below for where to get them).
0012 %
0013 % INPUT:
0014 %   Mobj - MATLAB mesh object. Must contain fields:
0015 %       lon, lat    - array of longitude and latitudes.
0016 %       have_lonlat - boolean to signify whether coordinates are spherical
0017 %                   or cartesian.
0018 %   modelTime - Modified Julian Date start and end times
0019 %   varargin - parameter/value pairs
0020 %       - list of variables to extract:
0021 %           'varlist', {'nshf', 'uwnd', 'vwnd'}
0022 %       - data source:
0023 %           'source', 'reanalysis1'
0024 %           'source', 'reanalysis2' [default]
0025 %           'source', '20thC'
0026 %
0027 % OUTPUT:
0028 %   data - struct of the data necessary to force FVCOM. These can be
0029 %   interpolated onto an unstructured grid in Mobj using grid2fvcom.m.
0030 %
0031 % The parameters which can be obtained from the NCEP data are:
0032 %     - u wind component (uwnd)
0033 %     - v wind component (vwnd)
0034 %     - Downward longwave radiation surface (dlwrf)
0035 %     - Net shortwave radiation surface (nswrs = dswrf - uswrf)
0036 %     - Air temperature (air)
0037 %     - Relative humidity (rhum)
0038 %     - Precipitation rate (prate)
0039 %     - Sea level pressure (pres or press)
0040 %     - Latent heat flux (lhtfl)
0041 %     - Sensible heat flux (shtfl)
0042 %     - Potential evaporation rate (pevpr)
0043 %     - Topography (topo)
0044 %
0045 % In addition to these, the momentum flux (tau) is calculated from wind
0046 % data. Precipitation is converted from kg/m^2/s to m/s. Evaporation (Et)
0047 % is calculated from the mean daily latent heat net flux (lhtfl) at the
0048 % surface. Precipitation-evaporation is also created (P_E).
0049 %
0050 % This output struct also includes a land mask extracted from the pevpr
0051 % data. If the pevpr data is not requested, then no land mask is returned.
0052 %
0053 % EXAMPLE USAGE:
0054 %   To download the default set of data (see list above):
0055 %
0056 %       forcing = get_NCEP_forcing(Mobj, [51345, 51376]);
0057 %
0058 %   To only download wind data:
0059 %
0060 %       forcing = get_NCEP_forcing(Mobj, [51345, 51376], 'varlist', {'uwnd', 'vwnd'});
0061 %
0062 %   To use the 20th Century Reanalysis 2 data:
0063 %
0064 %       forcing = get_NCEP_forcing(Mobj, [51345, 51376], 'source', '20thC');
0065 %
0066 % REQUIRES:
0067 %   The air_sea toolbox:
0068 %       http://woodshole.er.usgs.gov/operations/sea-mat/air_sea-html/index.html
0069 %   The OPeNDAP toolbox (MATLAB 2011b or older only):
0070 %       http://www.opendap.org/pub/contributed/source/ml-toolbox/
0071 %
0072 % Author(s)
0073 %   Pierre Cazenave (Plymouth Marine Laboratory)
0074 %   Ricardo Torres (Plymouth Marine Laboratory)
0075 %   Rory O'Hara Murray (Marine Scotland Science)
0076 %
0077 % Revision history:
0078 %   2012-10-31 First version based on get_NCEP_L4.m.
0079 %   2013-02-14 Add support for the native OPeNDAP functions in the MATLAB
0080 %   netcdf tools. Also fix the value of data.P_E.data to be only the
0081 %   evaporation. The evaporation available from NCEP in pevpr is in units
0082 %   of W m^{-2} whereas FVCOM expects m s^{-1}. Rather than convert from
0083 %   W m^{-2} to m s^{-1}, use the latent heat flux at the surface with the
0084 %   density of water and the latent heat of vaporisation to estimate the
0085 %   evaporation rate in m s^{-1}.
0086 %   2013-06-17 Remove the 'pevpr' variable from the data fetched from NCEP.
0087 %   The 'pevpr' data only covers land (and is therefore largely useless for
0088 %   the toolbox's needs). Also, we're not actually using 'pevpr' for the
0089 %   calculation of evaporation since we're estimating that from the latent
0090 %   heat net flux ('lhtfl'), so it's superfluous anyway.
0091 %   2013-06-28 Changed the way the MATLAB version is determined. Now using
0092 %   the release date rather than the version number: release 7.13 is newer
0093 %   than release 7.7, but numerically 7.13 is not greater than 7.7. (ROM)
0094 %   2013-07-01 Added the 'actual_range' attribute to the native matlab
0095 %   download section, as this is needed later when identifying the domain
0096 %   range and replacing values outside this with NaNs. (ROM)
0097 %   2013-07-18 Add support for selecting only a subset of the available
0098 %   variables from NCEP (PC).
0099 %   2013-08-02 Added the 'precision' attribute to the native matlab
0100 %   download section. Reduced the precision of the 'add_offset' and
0101 %   'scale_factor' attributes to that specified in the netCDF file. This is
0102 %   because errors can occur when rescaling the data with very high
0103 %   precision scale factors and offsets. (ROM)
0104 %   2013-08-07 Update the URL from which to download the data (actually use
0105 %   NCEP Reanalysis-2 now instead of the original NMC reanalysis). This has
0106 %   necessitated a change in some field names (slp is now pres). The NCEP
0107 %   Reanalysis-2 data don't have net {long,short}wave radiation flux data,
0108 %   so this is calculated from the downward and upward fluxes. Also check
0109 %   the returned data from NCEP match the dimensions of the longitude and
0110 %   latitude data (particularly important for interpolation onto the
0111 %   unstructured grid with grid2fvcom). I haven't fully tested these
0112 %   changes with the third-party OPeNDAP toolbox, but I have in principle
0113 %   added the necessary support (PC).
0114 %   2013-08-08 Make the script a generic script to download either the
0115 %   original reanalysis ('reanalysis1'), the reanalysis-2 ('reanalysis2')
0116 %   or the 20th Century Reanalysis-2 ('20thC') data (PC).
0117 %   2014-02-03 Merge Rory's changes to my latest version (PC).
0118 %   2015-07-07 Add support for multi-year downloads (PC).
0119 %
0120 %==========================================================================
0121 
0122 subname = 'get_NCEP_forcing';
0123 
0124 % Define date that matlab version 7.14 was released.
0125 % OPeNDAP was included in version 7.14
0126 % see http://en.wikipedia.org/wiki/MATLAB and
0127 % https://publicwiki.deltares.nl/display/OET/OPeNDAP+access+with+Matlab
0128 version_7_14_date = datenum(2012,3,1);
0129 %version_7_13_date = datenum(2011,9,1);
0130 
0131 % Depending on the MATLAB version, either use the native netcdf
0132 % libraries to load the OPeNDAP data or fall back to the relevant
0133 % third-party toolbox.
0134 out = ver('MATLAB');
0135 % Look at the date rather than the version number
0136 native_netcdf = datenum(out.Date) >= version_7_14_date;
0137 
0138 global ftbverbose;
0139 if ftbverbose
0140     fprintf('\nbegin : %s\n', subname)
0141 end
0142 
0143 % Parse the input arguments
0144 src = 'reanalysis2';
0145 varlist = [];
0146 if nargin > 2
0147     for a = 1:2:nargin - 2
0148         switch varargin{a}
0149             case 'varlist'
0150                 varlist = varargin{a + 1};
0151             case 'source'
0152                 src = varargin{a + 1};
0153         end
0154     end
0155 end
0156 if ftbverbose
0157     fprintf('Extracting %s data.\n', src)
0158 end
0159 
0160 % Get the extent of the model domain (in spherical)
0161 if ~Mobj.have_lonlat
0162     error('Need spherical coordinates to extract the forcing data')
0163 else
0164     % Add a buffer of one grid cell in latitude and two in longitude to
0165     % make sure the model domain is fully covered by the extracted data.
0166     [dx, dy] = deal(2.5, 2.5); % approximate NCEP resolution in degrees
0167     extents = [min(Mobj.lon(:))-(2*dx), max(Mobj.lon(:))+(2*dx), min(Mobj.lat(:))-dy, max(Mobj.lat(:))+dy];
0168 end
0169 
0170 % if modelTime(end) - modelTime(1) > 365
0171 %     error('Can''t (yet) process more than a year at a time.')
0172 % end
0173 
0174 % Create year and month arrays for the period we've been given.
0175 [yyyy, mm, dd, HH, MM, SS] = mjulian2greg(modelTime);
0176 dates = datenum([yyyy; mm; dd; HH; MM; SS]');
0177 serial = dates(1):dates(2);
0178 [years, ~, ~, ~, ~, ~] = datevec(serial);
0179 years = unique(years, 'stable');
0180 nt = length(years);
0181 
0182 for t = 1:nt
0183     year = years(t);
0184 
0185     % Set up a struct of the NCEP remote locations in which we're interested.
0186     % This list depends on the value of src (default is reanalysis2).
0187     switch src
0188         case 'reanalysis1'
0189             url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis/';
0190             % Set up a struct of the NCEP remote locations in which we're interested.
0191             ncep.uwnd = [url, 'surface_gauss/uwnd.10m.gauss.', num2str(year), '.nc'];
0192             ncep.vwnd = [url, 'surface_gauss/vwnd.10m.gauss.', num2str(year), '.nc'];
0193             ncep.nlwrs = [url, 'surface_gauss/nlwrs.sfc.gauss.', num2str(year), '.nc'];
0194             ncep.nswrs = [url, 'surface_gauss/nswrs.sfc.gauss.', num2str(year), '.nc'];
0195             ncep.air = [url, 'surface_gauss/air.2m.gauss.', num2str(year), '.nc'];
0196             ncep.rhum = [url, 'surface/rhum.sig995.', num2str(year), '.nc'];
0197             ncep.prate = [url, 'surface_gauss/prate.sfc.gauss.', num2str(year), '.nc'];
0198             ncep.slp = [url, 'surface/slp.', num2str(year), '.nc'];
0199             ncep.lhtfl = [url, 'surface_gauss/lhtfl.sfc.gauss.', num2str(year), '.nc'];
0200             ncep.shtfl = [url, 'surface_gauss/shtfl.sfc.gauss.', num2str(year), '.nc'];
0201             ncep.pevpr = [url, 'surface_gauss/pevpr.sfc.gauss.', num2str(year), '.nc'];
0202 
0203             % The fields below can be used to create the net shortwave and
0204             % longwave fluxes if the data you're using don't include net
0205             % fluxes. Subtract the upward from downward fluxes to get net
0206             % fluxes (net = down - up).
0207             ncep.dswrf = [url, 'surface_gauss/dswrf.sfc.gauss.', num2str(year), '.nc'];
0208             ncep.uswrf = [url, 'surface_gauss/uswrf.sfc.gauss.', num2str(year), '.nc'];
0209             ncep.dlwrf = [url, 'surface_gauss/dlwrf.sfc.gauss.', num2str(year), '.nc'];
0210             ncep.ulwrf = [url, 'surface_gauss/ulwrf.sfc.gauss.', num2str(year), '.nc'];
0211 
0212         case 'reanalysis2'
0213             url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/';
0214             % Grab the topo to mask off the land values (makes the
0215             % interpolation to an FVCOM domain more sensible). This is
0216             % geopotential height, so not really that useful in the end.
0217             % I'll leave it in in case it's of some use to someone.
0218             ncep.topo  = [url, 'surface/topo.sfc.nc'];
0219 
0220             % Get the forcing data.
0221             ncep.uwnd   = [url, 'gaussian_grid/uwnd.10m.gauss.', num2str(year), '.nc'];
0222             ncep.vwnd   = [url, 'gaussian_grid/vwnd.10m.gauss.', num2str(year), '.nc'];
0223             ncep.air    = [url, 'gaussian_grid/air.2m.gauss.', num2str(year), '.nc'];
0224             ncep.rhum   = [url, 'pressure/rhum.', num2str(year), '.nc'];
0225             ncep.prate  = [url, 'gaussian_grid/prate.sfc.gauss.', num2str(year), '.nc'];
0226             ncep.pres   = [url, 'surface/pres.sfc.', num2str(year), '.nc'];
0227             ncep.lhtfl  = [url, 'gaussian_grid/lhtfl.sfc.gauss.', num2str(year), '.nc'];
0228             ncep.shtfl  = [url, 'gaussian_grid/shtfl.sfc.gauss.', num2str(year), '.nc'];
0229 
0230             % The NCEP reanalysis data include net radiation fluxes whereas
0231             % the reanalysis-2 data don't. Instead, we calculate nswrs and
0232             % nlwrs from the downward and upward fluxes.
0233             % ncep.nlwrs  = [url, 'gaussian_grid/nlwrs.sfc.gauss.', num2str(year), '.nc'];
0234             % ncep.nswrs  = [url, 'gaussian_grid/nswrs.sfc.gauss.', num2str(year), '.nc'];
0235 
0236             % Evaporation is given in W/m^{2} whereas we want m/s. We
0237             % estimate evaporation from lhtfl instead and call it Et.
0238             % Instead we'll use this as a land mask since pevpr is only
0239             % given on land.
0240             ncep.pevpr  = [url, 'gaussian_grid/pevpr.sfc.gauss.', num2str(year), '.nc'];
0241 
0242             % The fields below can be used to create the net shortwave and
0243             % longwave fluxes if the data you're using don't include net
0244             % fluxes. Subtract the upward from downward fluxes to get net
0245             % fluxes (net = down - up).
0246             ncep.dswrf  = [url, 'gaussian_grid/dswrf.sfc.gauss.', num2str(year), '.nc'];
0247             ncep.uswrf  = [url, 'gaussian_grid/uswrf.sfc.gauss.', num2str(year), '.nc'];
0248             ncep.dlwrf  = [url, 'gaussian_grid/dlwrf.sfc.gauss.', num2str(year), '.nc'];
0249             ncep.ulwrf  = [url, 'gaussian_grid/ulwrf.sfc.gauss.', num2str(year), '.nc'];
0250         case '20thC'
0251             % Set up a struct of the NCEP remote locations in which we're interested.
0252             url = 'http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/20thC_ReanV2/';
0253 
0254             % Get the forcing data.
0255             ncep.uwnd   = [url, 'gaussian/monolevel/uwnd.10m.', num2str(year), '.nc'];
0256             ncep.vwnd   = [url, 'gaussian/monolevel/vwnd.10m.', num2str(year), '.nc'];
0257             ncep.air    = [url, 'gaussian/monolevel/air.2m.', num2str(year), '.nc'];
0258             ncep.rhum   = [url, 'pressure/rhum.', num2str(year), '.nc'];
0259             ncep.prate  = [url, 'gaussian/monolevel/prate.', num2str(year), '.nc'];
0260             ncep.press  = [url, 'gaussian/monolevel/press.sfc.', num2str(year), '.nc'];
0261             ncep.lhtfl  = [url, 'gaussian/monolevel/lhtfl.', num2str(year), '.nc'];
0262             ncep.shtfl  = [url, 'gaussian/monolevel/shtfl.', num2str(year), '.nc'];
0263 
0264             % The NCEP reanalysis data include net radiation fluxes whereas
0265             % the 20th Century Reanalysis-2 data don't. Instead, we
0266             % calculate nswrs and nlwrs from the downward and upward
0267             % fluxes.
0268             % ncep.nlwrs  = [url, 'gaussian/monolevel/nlwrs.sfc.', num2str(year), '.nc'];
0269             % ncep.nswrs  = [url, 'gaussian/monolevel/nswrs.sfc.', num2str(year), '.nc'];
0270 
0271             % Evaporation is given in W/m^{2} whereas we want m/s. We
0272             % estimate evaporation from lhtfl instead and call it Et.
0273             % Instead we'll use this as a land mask since pevpr is only
0274             % given on land.
0275             ncep.pevpr  = [url, 'gaussian/monolevel/pevpr.', num2str(year), '.nc'];
0276 
0277             % The fields below can be used to create the net shortwave and
0278             % longwave fluxes if the data you're using don't include net
0279                 % fluxes. Subtract the upward from downward fluxes to get net
0280                 % fluxes (net = down - up).
0281             ncep.dswrf  = [url, 'gaussian/monolevel/dswrf.sfc.', num2str(year), '.nc'];
0282             ncep.uswrf  = [url, 'gaussian/monolevel/uswrf.sfc.', num2str(year), '.nc'];
0283             ncep.dlwrf  = [url, 'gaussian/monolevel/dlwrf.sfc.', num2str(year), '.nc'];
0284             ncep.ulwrf  = [url, 'gaussian/monolevel/ulwrf.sfc.', num2str(year), '.nc'];
0285         otherwise
0286             error('Unrecognised ''source'' type. Valid values are ''reanalysis1'', ''reanalysis2'', ''20thC''.')
0287     end
0288 
0289     fields = fieldnames(ncep);
0290 
0291     if ftbverbose
0292         fprintf('Downloading for %04d\n', year)
0293     end
0294 
0295     for aa = 1:length(fields)
0296 
0297         % We've been given a list of variables to do, so skip those that
0298         % aren't in the list.
0299         if ~isempty(varlist) && max(strcmp(fields{aa}, varlist)) ~= 1
0300             continue
0301         end
0302 
0303         if ftbverbose
0304             fprintf('getting ''%s'' data... ', fields{aa})
0305         end
0306 
0307         if t == 1
0308             data.(fields{aa}).data = [];
0309             data.(fields{aa}).time = [];
0310             data.(fields{aa}).lat = [];
0311             data.(fields{aa}).lon = [];
0312             data_attributes.(fields{aa}) = [];
0313         end
0314 
0315         % Depending on the MATLAB version, either use the native netcdf
0316         % libraries to load the OPeNDAP data or fall back to the relevant
0317         % third-party toolbox.
0318         if native_netcdf
0319 
0320             %ncid_info = ncinfo(ncep.(fields{aa}));
0321             ncid = netcdf.open(ncep.(fields{aa}));
0322 
0323             % If you don't know what it contains, start by using the
0324             % 'netcdf.inq' operation:
0325             %[numdims,numvars,numglobalatts,unlimdimid] = netcdf.inq(ncid);
0326             varid = netcdf.inqVarID(ncid, 'time');
0327             data_time.time = netcdf.getVar(ncid, varid, 'double');
0328             if strcmpi(fields{aa}, 'topo')
0329                 % The topography variable isn't called topo but hgt. Why
0330                 % that is, is beyond me.
0331                 varid = netcdf.inqVarID(ncid, 'hgt');
0332             else
0333                 varid = netcdf.inqVarID(ncid, (fields{aa}));
0334             end
0335 
0336             data_attributes.(fields{aa}).(fields{aa}).scale_factor = ...
0337                 netcdf.getAtt(ncid,varid,'scale_factor','double');
0338             data_attributes.(fields{aa}).(fields{aa}).add_offset = ...
0339                 netcdf.getAtt(ncid,varid,'add_offset','double');
0340             data_attributes.(fields{aa}).(fields{aa}).unpacked_valid_range = ...
0341                 netcdf.getAtt(ncid, varid, 'unpacked_valid_range');
0342 
0343             data_attributes.(fields{aa}).(fields{aa}).actual_range = ...
0344                 netcdf.getAtt(ncid,varid,'actual_range','double');
0345             data_attributes.(fields{aa}).(fields{aa}).precision = ...
0346                 netcdf.getAtt(ncid,varid,'precision','double');
0347 
0348             % Change the precision of the attributes to avoid errors
0349             precision = 10^data_attributes.(fields{aa}).(fields{aa}).precision;
0350             data_attributes.(fields{aa}).(fields{aa}).scale_factor = ...
0351                 round(precision*data_attributes.(fields{aa}).(fields{aa}).scale_factor)./precision;
0352             data_attributes.(fields{aa}).(fields{aa}).add_offset   = ...
0353                 round(precision*data_attributes.(fields{aa}).(fields{aa}).add_offset)./precision;
0354 
0355             varid = netcdf.inqVarID(ncid,'lon');
0356             data_lon.lon = netcdf.getVar(ncid,varid,'double');
0357             varid = netcdf.inqVarID(ncid,'lat');
0358             data_lat.lat = netcdf.getVar(ncid,varid,'double');
0359             % Some of the NCEP Reanalysis 2 data are 4D, but with a single
0360             % vertical level (e.g. uwnd, vwnd, air, rhum).
0361             data_level_idx = [];
0362             try % not all data have a 'level', so fail gracefully here.
0363                 varid = netcdf.inqVarID(ncid, 'level');
0364                 data_level.level = netcdf.getVar(ncid, varid, 'double');
0365                 if length(data_level.level) > 1
0366                     % Assume we've got rhum and we want humidity at the sea
0367                     % surface (1013 millibars (or hPa)). As such, ZQQ must be
0368                     % 0.0 in the FVCOM model namelist. Find the closest level
0369                     % to pressure at 1 standard atmosphere.
0370                     [~, data_level_idx] = min(abs(data_level.level - 1013));
0371                 end
0372             end
0373             if isempty(data_level_idx) % default to the first
0374                 data_level_idx = 1;
0375             end
0376 
0377             if strcmpi(src, 'reanalysis1')
0378                 timevec = datevec((data_time.time) / 24 + 365);
0379             else
0380                 timevec = datevec((data_time.time / 24) + datenum(1800, 1, 1, 0, 0, 0));
0381             end
0382 
0383 
0384         else
0385             % We'll use the third-party OPeNDAP toolbox.
0386             data_time = loaddap([ncep.(fields{aa}),'?time']);
0387             data_attributes.(fields{aa}) = loaddap('-A',[ncep.(fields{aa})]);
0388             if strcmpi(src, 'reanalysis1')
0389                 timevec = datevec((data_time.time) / 24 + 365);
0390             else
0391                 timevec = datevec((data_time.time / 24) + datenum(1800, 1, 1, 0, 0, 0));
0392             end
0393 
0394             % Clip the data to the model domain
0395             data_lon = loaddap([ncep.(fields{aa}),'?lon']);
0396             % If the extents are negative in longitude, we need to extract the NCEP data in two chunks (see below).
0397             data_lat = loaddap([ncep.(fields{aa}),'?lat']);
0398 
0399             data_level_idx = 1;
0400             try
0401                 data_level = loaddap([ncep.(fields{aa}),'?level']);
0402                 if length(data_level.level) > 1
0403                     % Assume we've got rhum and we want humidity at the sea
0404                     % surface (since ZQQ = 0.0 in the FVCOM model namelist).
0405                     data_level_idx = find(data_level.level == 1000);
0406                 end
0407             end
0408             if isempty(data_level_idx) % default to the first
0409                 data_level_idx = 1;
0410             end
0411         end
0412 
0413         % Get the data time and convert to Modified Julian Day.
0414         scratch.time = greg2mjulian(...
0415             timevec(:,1), ...
0416             timevec(:,2), ...
0417             timevec(:,3), ...
0418             timevec(:,4), ...
0419             timevec(:,5), ...
0420             timevec(:,6));
0421         % Clip the time to the given range
0422         data_time_mask = scratch.time >= modelTime(1) & scratch.time <= modelTime(end);
0423         data_time_idx = 1:size(scratch.time,1);
0424         data_time_idx = data_time_idx(data_time_mask);
0425         if ~isempty(data_time_idx) % for the topo data mainly
0426             scratch.time = scratch.time(data_time_mask);
0427         else
0428             % Reset the index to its original size. This is for data with only
0429             % a single time stamp which falls outside the model time (as is the
0430             % case with the topography data). Only reset it when the length of
0431             % the input time is equal to 1.
0432             if length(scratch.time) == 1
0433                 data_time_idx = 1:size(scratch.time, 1);
0434             end
0435         end
0436 
0437         % Check the times
0438         %[yyyy,mm,dd,hh,MM,ss] = mjulian2greg(data.time(1))
0439         %[yyyy,mm,dd,hh,MM,ss] = mjulian2greg(data.time(end))
0440         % Get the data in two goes, once for the end of the grid (west of
0441         % Greenwich), once for the beginning (east of Greenwich), and then
0442         % stick the two bits together.
0443         clear index_lon index_lat
0444         if extents(1) < 0 && extents(2) < 0
0445             % This is OK, we can just shunt the values by 360.
0446             extents(1) = extents(1) + 360;
0447             extents(2) = extents(2) + 360;
0448             index_lon = find(data_lon.lon > extents(1) & data_lon.lon < extents(2));
0449         elseif extents(1) < 0 && extents(2) > 0
0450             % This is the tricky one. We'll do two passes to extract the
0451             % western chunk first (extents(1)+360 to 360), then the eastern
0452             % chunk (0-extents(2)).
0453             index_lon{1} = find(data_lon.lon >= extents(1) + 360);
0454             index_lon{2} = find(data_lon.lon <= extents(2));
0455         else
0456             % Dead easy, we're in the eastern hemisphere, so nothing too
0457             % strenuous here.
0458             index_lon = find(data_lon.lon > extents(1) & data_lon.lon < extents(2));
0459         end
0460 
0461         % Latitude is much more straightforward
0462         index_lat = find(data_lat.lat > extents(3) & data_lat.lat < extents(4));
0463         data.(fields{aa}).lat = data_lat.lat(index_lat);
0464 
0465         % Get the data
0466         if iscell(index_lon)
0467             data.(fields{aa}).lon = data_lon.lon(cat(1,index_lon{:}));
0468 
0469             % We need to do each half and merge them.
0470             if native_netcdf
0471                 % varidlon = netcdf.inqVarID(ncid,'lon');
0472                 % varidtime = netcdf.inqVarID(ncid,'time');
0473                 % varidlat = netcdf.inqVarID(ncid,'lat');
0474 
0475                 if strcmpi(fields{aa}, 'topo')
0476                     varid = netcdf.inqVarID(ncid, 'hgt');
0477                 else
0478                     varid = netcdf.inqVarID(ncid,(fields{aa}));
0479                 end
0480                 %[varname,xtype,dimids,natts] = netcdf.inqVar(ncid,varid);
0481                 %[~, length1] = netcdf.inqDim(ncid, dimids(1))
0482                 %[~, length2] = netcdf.inqDim(ncid, dimids(2))
0483                 %[~, length3] = netcdf.inqDim(ncid, dimids(3))
0484                 %[~, length4] = netcdf.inqDim(ncid, dimids(4))
0485                 % Dimension order is [lon, lat, level, time] or [lon, lat,
0486                 % time]. Offset indices by 1 (netcdf counts from 0).
0487                 [~, ~, dimids, ~] = netcdf.inqVar(ncid,varid);
0488                 if length(dimids) == 4
0489                     start = [min(index_lon{1}), min(index_lat), data_level_idx, min(data_time_idx)] - 1;
0490                     count = [length(index_lon{1}), length(index_lat), length(data_level_idx), length(data_time_idx)];
0491                 elseif length(dimids) == 3
0492                     start = [min(index_lon{1}), min(index_lat), min(data_time_idx)] - 1;
0493                     count = [length(index_lon{1}), length(index_lat), length(data_time_idx)];
0494                 end
0495                 data1_west.(fields{aa}).(fields{aa}) = netcdf.getVar(ncid, varid, start, count, 'double');
0496 
0497                 if length(dimids) == 4
0498                     start = [min(index_lon{2}), min(index_lat), data_level_idx, min(data_time_idx)] - 1;
0499                     count = [length(index_lon{2}), length(index_lat), length(data_level_idx), length(data_time_idx)];
0500                 elseif length(dimids) == 3
0501                     start = [min(index_lon{2}), min(index_lat), min(data_time_idx)] - 1;
0502                     count = [length(index_lon{2}), length(index_lat), length(data_time_idx)];
0503                 end
0504                 data1_east.(fields{aa}).(fields{aa}) = netcdf.getVar(ncid, varid, start, count, 'double');
0505 
0506                 data1.(fields{aa}).(fields{aa}).(fields{aa}) = ...
0507                     cat(1, data1_west.(fields{aa}).(fields{aa}), data1_east.(fields{aa}).(fields{aa}));
0508 
0509             else
0510                 % The topo needs to be handled slightly differently because its
0511                 % variable name is not the same as the prefix in the file name.
0512                 if strcmpi(fields{aa}, 'topo')
0513                     tmpvarname = 'hgt';
0514                 else
0515                     tmpvarname = fields{aa};
0516                 end
0517                 eval(['data1_west.(fields{aa}) = loaddap(''', ncep.(fields{aa}),'?',...
0518                     tmpvarname,'[', num2str(min(data_time_idx)-1),':',...
0519                     num2str(max(data_time_idx)-1), '][',...
0520                     num2str(min(index_lat)-1), ':', num2str(max(index_lat)-1),...
0521                     '][', num2str(min(index_lon{1})-1), ':',...
0522                     num2str(length(data_lon.lon)-1), ']'');']);
0523                 eval(['data1_east.(fields{aa}) = loaddap(''', ncep.(fields{aa}),'?',...
0524                     tmpvarname, '[', num2str(min(data_time_idx)-1),':',...
0525                     num2str(max(data_time_idx)-1), '][',...
0526                     num2str(min(index_lat)-1), ':', num2str(max(index_lat)-1),...
0527                     '][', '0', ':', num2str(max(index_lon{2})-1), ']'');']);
0528 
0529                 if strcmpi(fields{aa}, 'topo')
0530                     data1_east.(fields{aa}).(fields{aa}) = data1_east.(fields{aa}).(tmpvarname);
0531                     data1_west.(fields{aa}).(fields{aa}) = data1_west.(fields{aa}).(tmpvarname);
0532                     data1_east.(fields{aa}) = rmfield(data1_east.(fields{aa}), tmpvarname); data1_west.(fields{aa}) = rmfield(data1_west.(fields{aa}), tmpvarname);
0533                 end
0534 
0535                 % Merge the two sets of data together
0536                 structfields = fieldnames(data1_west.(fields{aa}).(fields{aa}));
0537                 for ii=1:length(structfields)
0538                     switch structfields{ii}
0539                         case 'lon'
0540                             % Only the longitude and the actual data need
0541                             % sticking together, but each must be done along a
0542                             % different axis (lon is a vector, the data is an
0543                             % array).
0544                             data1.(fields{aa}).(fields{aa}).(structfields{ii}) = ...
0545                                 [data1_west.(fields{aa}).(fields{aa}).(structfields{ii});data1_east.(fields{aa}).(fields{aa}).(structfields{ii})];
0546                         case fields{aa}
0547                             % This is the actual data
0548                             data1.(fields{aa}).(fields{aa}).(structfields{ii}) = ...
0549                                 [data1_west.(fields{aa}).(fields{aa}).(structfields{ii}),data1_east.(fields{aa}).(fields{aa}).(structfields{ii})];
0550                         otherwise
0551                             % Assume the data are the same in both arrays. A
0552                             % simple check of the range of values in the
0553                             % difference between the two arrays should show
0554                             % whether they're the same or not. If they are, use
0555                             % the western values, otherwise, warn about the
0556                             % differences. It might be the data are relatively
0557                             % unimportant anyway (i.e. not used later on).
0558                             try
0559                                 tdata = data1_west.(fields{aa}).(fields{aa}).(structfields{ii}) - data1_east.(fields{aa}).(fields{aa}).(structfields{ii});
0560                                 if range(tdata(:)) == 0
0561                                     % They're the same data
0562                                     data1.(fields{aa}).(fields{aa}).(structfields{ii}) = ...
0563                                         data1_west.(fields{aa}).(fields{aa}).(structfields{ii});
0564                                 else
0565                                     warning('Unexpected data field: the west and east halves don''t match. Skipping.')
0566                                 end
0567                             catch
0568                                 warning('Unexpected data field: could not compare the west and east halves. Skipping.')
0569                             end
0570                             clear tdata
0571                     end
0572                 end
0573             end
0574         else
0575             % We have a straightforward data extraction
0576             data.(fields{aa}).lon = data_lon.lon(index_lon);
0577 
0578             if native_netcdf
0579                 varid = netcdf.inqVarID(ncid,(fields{aa}));
0584                 start = [min(index_lon)-1, min(index_lat)-1, min(data_time_idx)-1];
0585                 count = [length(index_lon), length(index_lat), length(data_time_idx)];
0586                 % Some variables (e.g. air) have an extra level dimension and
0587                 % fail with three element start/count arrays. Try three elements
0588                 % first (the original behaviour for unaffected variables), then
0589                 % fall back to requesting only the first level (start 0, count 1).
0590                 try
0591                     data1.(fields{aa}).(fields{aa}).(fields{aa}) = netcdf.getVar(ncid,varid,start,count,'double');
0592                 catch
0593                     start = [min(index_lon)-1, min(index_lat)-1, 0, min(data_time_idx)-1];
0594                     count = [length(index_lon), length(index_lat), 1, length(data_time_idx)];
0595                     data1.(fields{aa}).(fields{aa}).(fields{aa}) = netcdf.getVar(ncid,varid,start,count,'double');
0596                 end
0597 
0598             else
0599                 eval(['data1.(fields{aa}) = loaddap(''', ncep.(fields{aa}),'?',...
0600                     fields{aa}, '[', num2str(min(data_time_idx)-1),':',...
0601                     num2str(max(data_time_idx)-1), '][',...
0602                     num2str(min(index_lat)-1), ':', num2str(max(index_lat)-1),...
0603                     '][', num2str(min(index_lon)-1), ':',...
0604                     num2str(max(index_lon)-1), ']'');']);
0605             end
0606         end
0607 
0608         datatmp = squeeze(data1.(fields{aa}).(fields{aa}).(fields{aa}));
0609         datatmp = (datatmp * data_attributes.(fields{aa}).(fields{aa}).scale_factor) + data_attributes.(fields{aa}).(fields{aa}).add_offset;
0610 
0611         % Fix the longitude ranges for all data.
0612         data.(fields{aa}).lon(data.(fields{aa}).lon > 180) = ...
0613             data.(fields{aa}).lon(data.(fields{aa}).lon > 180) - 360;
0614 
0615         if t == 1
0616             data.(fields{aa}).data = datatmp;
0617             data.(fields{aa}).time = scratch.time;
0618         else
0619             data.(fields{aa}).data = cat(3, data.(fields{aa}).data, datatmp);
0620             data.(fields{aa}).time = cat(1, data.(fields{aa}).time, scratch.time);
0621         end
0622         data.(fields{aa}).unpacked_valid_range = ...
0623             data_attributes.(fields{aa}).(fields{aa}).unpacked_valid_range;
0627 
0628         % Replace values outside the specified actual range with NaNs. For the
0629         % majority of the variables, this shouldn't ever really generate values
0630         % of NaN since the coverage is global (land and sea). This did crop up
0631         % as a problem with the pevpr data (which is land only). In some ways,
0632         % if something fails later on (e.g. the interpolation) because there
0633         % are NaNs, that should be a wake-up call to check what's going on
0634         % with the data.
0635         if isfield(data_attributes.(fields{aa}).(fields{aa}), 'actual_range')
0636             actual_min = data_attributes.(fields{aa}).(fields{aa}).actual_range(1);
0637             actual_max = data_attributes.(fields{aa}).(fields{aa}).actual_range(2);
0638             mask = data.(fields{aa}).data < actual_min | data.(fields{aa}).data > actual_max;
0639             data.(fields{aa}).data(mask) = NaN;
0640         end
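The unpack-and-mask sequence (scale_factor/add_offset at 0609, the actual_range mask just above) amounts to the following. A hedged NumPy sketch: the int16 packing and the attribute values are invented for illustration, not taken from a real NCEP file:

```python
import numpy as np

def unpack(packed, scale_factor, add_offset, actual_range=None):
    """Apply the NetCDF packing attributes, then replace values outside the
    advertised physical range with NaN, mirroring the listing's behaviour."""
    data = packed.astype(float) * scale_factor + add_offset
    if actual_range is not None:
        lo, hi = actual_range
        data[(data < lo) | (data > hi)] = np.nan
    return data

raw = np.array([[0, 100], [200, 32766]], dtype=np.int16)  # 32766: fill-ish value
sst = unpack(raw, scale_factor=0.01, add_offset=273.15,
             actual_range=(271.0, 320.0))
```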
0641 
0642         if ftbverbose
0643             if isfield(data, fields{aa})
0644                 fprintf('done.\n')
0645             else
0646                 fprintf('error!\n')
0647             end
0648         end
0649     end
0650 end
0651 % Calculate the net long and shortwave radiation fluxes.
0652 if isfield(data, 'ulwrf') && isfield(data, 'uswrf') && isfield(data, 'dlwrf') && isfield(data, 'dswrf')
0653     vars = {'nswrs', 'nlwrs'};
0654     up = {'uswrf', 'ulwrf'};
0655     down = {'dswrf', 'dlwrf'};
0656     for i = 1:length(vars)
0657         % Don't overwrite the net fluxes if we already have them (for
0658         % reanalysis-1, for example).
0659         if isfield(data, vars{i})
0660             continue
0661         end
0662         data.(vars{i}).data = data.(down{i}).data - data.(up{i}).data;
0663         data.(vars{i}).time = data.(up{i}).time;
0664         data.(vars{i}).lon = data.(up{i}).lon;
0665         data.(vars{i}).lat = data.(up{i}).lat;
0666     end
0667 end
0668 
0669 % To maintain compatibility, dump the time from the last variable
0670 % loaded into a top level variable.
0671 data.time = data.(fields{aa}).time;
0672 
0673 % Now we have some data, we need to create some additional parameters
0674 % required by FVCOM.
0675 
0676 % Convert precipitation from kg/m^2/s to m/s (required by FVCOM) by
0677 % dividing by freshwater density (kg/m^3).
0678 if isfield(data, 'prate')
0679     data.prate.data = data.prate.data / 1000;
0680 end
0681 
0682 % Evaporation can be approximated by:
0683 %
0684 %   E(m/s) = lhtfl/Llv/rho
0685 %
0686 % where:
0687 %
0688 %   lhtfl   = "Mean daily latent heat net flux at the surface"
0689 %   Llv     = Latent heat of vaporization (approx to 2.5*10^6 J kg^-1)
0690 %   rho     = 1025 kg/m^3
0691 %
0692 if isfield(data, 'prate') && isfield(data, 'lhtfl')
0693     Llv = 2.5*10^6;
0694     rho = 1025; % using a typical value for seawater.
0695     Et = data.lhtfl.data/Llv/rho;
0696     data.P_E.data = data.prate.data - Et;
0697     data.P_E.lon = data.prate.lon;
0698     data.P_E.lat = data.prate.lat;
0699     data.P_E.time = data.prate.time;
0700     % Evaporation and precipitation need to have the same sign for FVCOM (ocean
0701     % losing water is negative in both instances). So, flip the evaporation
0702     % here.
0703     data.Et.data = -Et;
0704 end
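The P minus E calculation above reduces to a one-liner. A minimal Python sketch with the same constants (Llv = 2.5e6 J/kg, rho = 1025 kg/m^3); the sample flux values in the usage line are invented:

```python
def net_precipitation(prate, lhtfl, Llv=2.5e6, rho=1025.0):
    """P - E in m/s: precipitation (already converted to m/s) minus
    evaporation derived from the latent heat net flux (W/m^2). Et is
    negated on return so evaporative loss carries the same sign
    convention as precipitation, as the listing notes FVCOM expects."""
    Et = lhtfl / (Llv * rho)
    return prate - Et, -Et

p_e, et = net_precipitation(prate=1e-8, lhtfl=100.0)
```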
0705 
0706 % Calculate the momentum flux
0707 if isfield(data, 'uwnd') && isfield(data, 'vwnd')
0708     WW = data.uwnd.data + data.vwnd.data * 1i;
0709     data.tau.data = stresslp(abs(WW),10);
0710     [data.tx.data,data.ty.data] = wstress(data.uwnd.data,data.vwnd.data,10);
0711     data.tx.data = reshape(data.tx.data * 0.1, size(data.uwnd.data)); % dyn/cm^2 to N/m^2
0712     data.ty.data = reshape(data.ty.data * 0.1, size(data.uwnd.data)); % dyn/cm^2 to N/m^2
0713 end
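stresslp and wstress come from the air_sea toolbox and fit a height- and speed-dependent drag coefficient. As a stand-in, the quadratic bulk formula they are built on can be sketched with a constant drag coefficient (rho_air and cd here are illustrative assumptions, not the toolbox's fitted values):

```python
import math

def wind_stress(u, v, rho_air=1.22, cd=1.3e-3):
    """Bulk quadratic drag law: tau = rho_air * Cd * |U|^2, with the
    components scaled by the unit wind vector. Output is already in
    N/m^2, so no dyn/cm^2 conversion is needed, unlike wstress in the
    listing."""
    speed = math.hypot(u, v)
    if speed == 0.0:
        return 0.0, 0.0, 0.0
    tau = rho_air * cd * speed ** 2
    return tau, tau * u / speed, tau * v / speed
```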
0714 
0715 % Get the fields we need for the subsequent interpolation. Find the
0716 % position of a sensibly sized array (i.e. not 'topo', 'rhum' or 'pres').
0717 for vv = 1:length(fields)
0718     if ~isempty(varlist) && max(strcmp(fields{vv}, varlist)) ~= 1
0719         continue
0720     end
0721 
0722     switch fields{vv}
0723         % Set ii in each case so we still have a valid index if we've
0724         % been told to use only one of the differently gridded variables.
0725         case 'topo'
0726             ii = vv;
0727             continue
0728         case 'rhum'
0729             ii = vv;
0730             continue
0731         case {'pres', 'press'}
0732             ii = vv;
0733             continue
0734         otherwise
0735             % We've got one, so stop looking.
0736             ii = vv;
0737             break
0738     end
0739 end
0740 data.lon = data.(fields{ii}).lon;
0741 data.lon(data.lon > 180) = data.lon(data.lon > 180) - 360;
0742 data.lat = data.(fields{ii}).lat;
0743 
0744 % Create a land mask from the pevpr data (if it's been extracted).
0745 if isfield(data, 'pevpr')
0746     % Find any value less than or equal to the valid maximum across all
0747     % time steps.
0748     data.land_mask = max(data.pevpr.data <= data.pevpr.unpacked_valid_range(2), [], 3);
0749 end
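The mask above flags cells where the land-only pevpr field ever holds a valid value across the time steps. Equivalent NumPy, with time on the third axis as in the listing (the sample array and valid maximum are invented):

```python
import numpy as np

def land_mask(pevpr, valid_max):
    """True where pevpr is within its valid range at any time step: land
    cells unpack inside the range, sea cells stay NaN/out of range
    throughout. np.any over the time axis mirrors max(..., [], 3)."""
    return np.any(pevpr <= valid_max, axis=2)

pevpr = np.full((2, 2, 3), np.nan)
pevpr[0, 0, 1] = 5.0   # one valid land value at a single time step
mask = land_mask(pevpr, valid_max=10.0)
```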
0750 
0751 % Convert temperature to degrees Celsius (from Kelvin)
0752 if isfield(data, 'air')
0753     data.air.data = data.air.data - 273.15;
0754 end
0755 
0756 % Make sure all the data we have downloaded is the same shape as the
0757 % longitude and latitude arrays. This is complicated by the fact the NCEP
0758 % surface products (e.g. rhum, pres) are on a different grid from the rest
0759 % (e.g. uwnd).
0760 for aa = 1:length(fields)
0767 
0768     if ~isempty(varlist) && max(strcmp(fields{aa}, varlist)) ~= 1
0769         % We've been given a list of variables to extract, so skip those
0770         % that aren't in that list
0771         continue
0772     else
0773         if isfield(data, fields{aa})
0774             [px, py] = deal(length(data.(fields{aa}).lon), length(data.(fields{aa}).lat));
0775             [ncx, ncy, ~] = size(data.(fields{aa}).data);
0776             if ncx ~= px || ncy ~= py
0777                 data.(fields{aa}).data = permute(data.(fields{aa}).data, [2, 1, 3]);
0778 
0779                 % Check everything's OK now.
0780                 [ncx, ncy, ~] = size(data.(fields{aa}).data);
0781                 if ncx ~= px || ncy ~= py
0782                     error('Unable to resize data arrays to match the position data orientation. Are these NCEP surface data (i.e. on a different horizontal grid)?')
0783                 else
0784                     if ftbverbose
0785                         fprintf('Matching %s data dimensions to position arrays\n', fields{aa})
0786                     end
0787                 end
0788             end
0789         else
0790             warning('Variable %s requested but not downloaded?', fields{aa})
0791         end
0792     end
0793 end
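The shape reconciliation loop above boils down to: transpose the two horizontal axes once if (lon, lat) arrive swapped, and give up otherwise. A Python sketch of that logic (like the MATLAB permute, it is ambiguous when nlon equals nlat):

```python
import numpy as np

def match_orientation(data, nlon, nlat):
    """Return data as (lon, lat, time), transposing the two horizontal
    axes if they arrived as (lat, lon, time); raise if neither
    orientation matches the coordinate array lengths."""
    if data.shape[:2] == (nlon, nlat):
        return data
    swapped = np.transpose(data, (1, 0, 2))
    if swapped.shape[:2] == (nlon, nlat):
        return swapped
    raise ValueError('data shape matches neither (lon, lat) orientation')

arr = np.zeros((3, 4, 2))               # arrived as (lat, lon, time)
fixed = match_orientation(arr, nlon=4, nlat=3)
```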
0794 
0795 if datenum(out.Date) > version_7_14_date
0796     netcdf.close(ncid)
0797 end
0798 
0799 % Have a look at some data.
0800 % [X, Y] = meshgrid(data.lon, data.lat);
0801 % for i=1:size(data.uwnd.data, 3)
0802 %     figure(1)
0803 %     clf
0804 %     uv = sqrt(data.uwnd.data(:, :, i).^2 + data.vwnd.data(:, :, i).^2);
0805 %     pcolor(X, Y, uv')
0806 %     shading flat
0807 %     axis('equal','tight')
0808 %     pause(0.1)
0809 % end
0810 
0811 if ftbverbose
0812     fprintf('end   : %s\n', subname)
0813 end
0814 
0815 return

Generated on Wed 20-Feb-2019 16:06:01 by m2html © 2005