Preamble

This notebook compiles all available estimates and measurements of vertical nitrate fluxes in the Arctic Ocean.

All figures exported from this notebook are prefixed by FIGURE_FN-COMP_.

In [ ]:
%load_ext autoreload
%autoreload 2
%run imports.py

dims = dict(
    fndim = hv.Dimension('FN', label='Nitrate flux', unit='mmol m⁻² d⁻¹', range=(.005, 10)),
)

Compiling the nitrate flux database

Information was gathered into a spreadsheet, mostly directly from each reference, with auxiliary information sometimes drawn from related publications. Some entries required additional calculations based on published data. Note that for the nitrate fluxes newly calculated for this study, these auxiliary calculations were already done in B0_new_estimates.ipynb.

Surface nitrate concentration in Nishino et al., 2018

In [ ]:
df = pd.read_csv(
    '../data/fn-compilation-database/nishino2018biogeochemical/bottle/MR150301_ex_bot.csv',
    skiprows=[0, 2],
    na_values = -999,
    parse_dates = ['DATE']
)

# average over the upper 25 m on 23 Sep 2015
df[['CTDDPT','NITRAT','NITRAT2','NITRAT_AVE']].loc[
    (df.CTDDPT<=25) &
    (dt.datetime(2015,9,23,0,0,0)<= df.DATE) &
    (dt.datetime(2015,9,23,23,59,59)>= df.DATE)
].mean()

So the average surface concentration, consistent with Fig. 2b of their paper, is around 0.004 µM.

Coordinates in randelhoff2016vertical

In [ ]:
m = loadmat('/Users/doppler/database/nitrate-fluxes/results_all.mat',
           squeeze_me=True)
df = gpd.GeoDataFrame(
    dict(
        ice=m['isice'],
        fn=m['FN'],
        lon=m['lon'],
        lat=m['lat'],
        time=m['time']
    )
)
df['geometry'] = [Point(x,y) for x,y in zip(df.lon,df.lat)]
df.groupby('ice').geometry.apply(lambda x: x.unary_union.convex_hull.centroid)
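The same construction, taking the centroid of the convex hull of the station coordinates as a representative position, recurs throughout this section. A minimal, self-contained shapely illustration with made-up coordinates (note the lon, lat ordering of the point tuples):

```python
from shapely.geometry import MultiPoint

# Four hypothetical stations at the corners of a box (lon, lat order)
stations = MultiPoint([(0, 70), (10, 70), (10, 72), (0, 72)])

# Representative position: centroid of the convex hull of the stations
centroid = stations.convex_hull.centroid
print(centroid.wkt)
```

For stations spanning the dateline or the pole, this naive lon/lat centroid can be misleading, so the positions below are only meant as rough labels for the compilation table.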

randelhoff2016regional

Here, we only need to find representative positions to enter into the nitrate flux compilation CSV file.

Canada, Amundsen, Makarov Basins

Makarov Basin XCP locations in this file: ~/data/fn-compilation-database/randelhoff2016regional/deep_FN/john_canadian_basin/MMP_XCP_locations.mat
In [ ]:
# Latitudes and longitudes, already extracted from the file, are as follows:

lat = [84,85,86,86,87,88,89,89,90,85,86,86,86,87,87,88,72.6666666666667,73,73,74.3333333333333,74.6666666666667,74,74,75]
lon = [-135,90,-135,90,-135,90,-135,90,0,90,-136,-173,90,-137,-180,-180,-145,-140,-150,-143,-146,-140,-150,-150]

# Amundsen Basin
coords_AM = MultiPoint([Point(x,y) for x,y in zip(lon,lat) if x>0 and y>87] # NPEO ISUS
                       + [Point(x,y) for x,y in zip([-7, -10, 0], [89, 87.5, 88.5])] # NPEO MSS casts
                      ).convex_hull

# Makarov Basin
coords_MK = MultiPoint([Point(x,y) for x,y in zip(lon,lat) if x<0 and y>80] # NPEO ISUS
                       + [Point(x,y) for x,y in zip( # XCP Makarov
                           [-134.715666666667,-134.147500000000,-136.040000000000,-84.5061666666667],
                           [84.4781666666667,85.7421666666667,88.5833333333333,89.0641666666667])]
                      ).convex_hull

# Canadian Basin
coords_CB = MultiPoint([Point(x,y) for x,y in zip(lon,lat) if y<80] # NPEO ISUS
                       + [Point(x,y) for x,y in zip([-150, -150, -140, -140],[72.5, 78, 77, 72.5])] # Beaufort Gyre exploration project moorings
                      ).convex_hull
In [ ]:
coords_AM.centroid.wkt
In [ ]:
coords_MK.centroid.wkt
In [ ]:
coords_CB.centroid.wkt

Nansen Basin/Yermak Plateau

FN was calculated using CALC_FN_deep.m; the input casts were selected in MATLAB as follows:

nice = load('~/_WORK/_DATA/NICE/ISUS/NICE2015_ISUS_calibrated.mat')
isus = nice.isus([2:10 16]) 

Data link (as a .nc file): https://data.npolar.no/dataset/96eb41f9-c620-5fe4-a7a3-96b0e55fd3d5

In [ ]:
ds = xr.open_dataset('../data/fn-compilation-database/N-ICE2015-ISUS.nc')
# Stations in that file used to calculate the nitrate fluxes
# (0-based indices corresponding to MATLAB's isus([2:10 16]) above):
stations = list(range(1, 10)) + [15]

ds.isel(LATITUDE=stations, LONGITUDE=stations)[['LONGITUDE', 'LATITUDE']].to_dataframe().reset_index().mean()

Convert csv file into a markdown table

In [ ]:
df = pd.read_csv('../data/fn-compilation.csv')
# prefix citation keys with '@' unless the reference reads 'this study'
df['Reference'] = (
    df.Reference.str.startswith('this study').apply(lambda b: '' if b else '@')
    + df.Reference
)

df = (
    df
    .assign(sorter=df.Reference.str.extract(r'(\d{4})', expand=False)+df.Reference)
    .sort_values('sorter')
    .drop(columns='sorter')
    .replace('single', 'Few measurements')
    .replace('aggregate', 'Aggregate value')
)

df_export = (
    df[['Reference', 'FN', 'Area', 'Season', 'samplesize']]
    .rename(columns=dict(
        samplesize='Sample size', 
        Nitrate_measurement='Nitrate measurement',
        Turbulence_measurement='Turbulence measurement',
        Area='Region',
    ))
)


s = pandas_df_to_markdown_table(df_export)

print(s)
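The throwaway `sorter` column above is a sort key: the four-digit year extracted from the citation key, concatenated with the full reference string, orders entries by year first and alphabetically within a year. A small illustration with hypothetical citation keys:

```python
import pandas as pd

refs = pd.Series(['@smith2004deep', '@jones1999mixing', '@brown2004arctic'])

# year (first run of four digits) prepended to the full string
# gives a year-then-alphabetical sort key
sorter = refs.str.extract(r'(\d{4})', expand=False) + refs

print(refs[sorter.sort_values().index].tolist())
# ['@jones1999mixing', '@brown2004arctic', '@smith2004deep']
```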

Compile nitrate fluxes from the world ocean outside the Arctic

One additional calculation, based on Planas et al., 1999:

Table 1: Krho in m² d⁻¹, dNO3dz in mmol m⁻⁴; their product FN = Krho · dNO3dz is thus in mmol m⁻² d⁻¹.

In [ ]:
df = pd.DataFrame(
    dict(
        Krho=[0.097, 0.015, 0.491, 0.417, 0.796, 1.13, 0.266, 1.45, 1.76, 5.84, 4.15, 14.43, 1.14, 2.59],
        dNO3dz=[0.0005, 0.0025, 0.0168, 0.0365, 0.0017, 0.049, 0.109, 0.159, 0.225, 0.268, 0.202, 0.146, 0.038, 0.030]
    )
)

df['FN'] = df.Krho * df.dNO3dz  # mmol m⁻² d⁻¹

df.mean()

Convert csv file into a markdown table

In [ ]:
df = pd.read_csv('../data/fn-compilation-world.csv')

df['Reference'] = '@' + df.Reference
df['Year'] = df.Reference.str.extract(r'(\d{4})', expand=False)
df = df.sort_values(['Year', 'Reference', 'FN'])
print(pandas_df_to_markdown_table(df[['Reference', 'FN', 'Region']]))
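`pandas_df_to_markdown_table` is pulled in via `imports.py` and its implementation is not shown in this notebook. A minimal sketch of what such a helper could look like (the actual helper may differ):

```python
import pandas as pd

def pandas_df_to_markdown_table(df: pd.DataFrame) -> str:
    """Render a DataFrame as a pipe-delimited Markdown table (index dropped)."""
    header = '| ' + ' | '.join(df.columns) + ' |'
    separator = '| ' + ' | '.join('---' for _ in df.columns) + ' |'
    rows = [
        '| ' + ' | '.join(str(v) for v in record) + ' |'
        for record in df.itertuples(index=False)
    ]
    return '\n'.join([header, separator] + rows)

print(pandas_df_to_markdown_table(pd.DataFrame({'Reference': ['@a2016'], 'FN': [0.3]})))
```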

Comparison Arctic-worldwide nitrate fluxes

Split non-Arctic into coastal/shelf vs. open ocean

In [ ]:
# register more style options with the bokeh backend

so = ['hatch_pattern', 'hatch_scale', 'hatch_alpha', 'hatch_weight']
hv.Store.add_style_opts(hv.Area, so, backend='bokeh')
hv.Store.add_style_opts(hv.Distribution, so, backend='bokeh')
In [ ]:
df = pd.concat(
    [
        pd.read_csv('../data/fn-compilation.csv').assign(world_region='Arctic').rename(columns={'Area': 'Region'}),
        pd.read_csv('../data/fn-compilation-world.csv').assign(world_region=lambda x: 'Non-Arctic: '+x.Environment)
    ],
    sort=False,
)
df = df.loc[df.FN>1e-10]
df = df.assign(logFN=np.log10(df.FN))

options = [
    opts.Distribution(
        tools=['hover'], padding=(0, (0, 0.05)),
        frame_width=500, frame_height=250,
        show_grid=True,
        xticks=[(logx, 10**logx) for logx in range(-3, 3)], 
        fill_color=hv.Cycle(['#AA5F77', '#2DA3A2', '#D6AE4A']),
        hatch_scale=16,
        hatch_alpha=.3,
        hatch_pattern=hv.Cycle(['|', ' ', 'v']),
        line_width=2.5, hatch_weight=3, 
    ),
    opts.NdOverlay(legend_position='top'),
]

l = (
    df.hvplot.kde('logFN', by='world_region').opts()
    .redim(
        logFN=hv.Dimension('logFN', label='Nitrate flux', unit='mmol m⁻² d⁻¹', range=(-4, 3)),
        Density='Probability density',
    )
    .opts(*options)
)

fname = '../nb_fig/FIGURE_FN-COMP_comparison_world'
hv.save(l, fname+'.html')
hv.output(l)
l = l.opts(toolbar=None)
hv.save(l, fname+'.png')
save_bokeh_svg(l.opts(opts.Distribution(hatch_pattern=None), clone=True), fname+'.svg')