QUB | Archaeology and Palaeoecology | The 14Chrono Centre

- Introduction
- Radiocarbon ages
- Radiocarbon calibration
- Depth-age modelling
- Rate of change
- Implementation
- References

- For most palaeoecological applications, the units of time are years (yr), thousands of years, etc. Different units are sometimes used to indicate absolute ages (5ka: 5000 years ago) or durations (5kyr: an interval lasting 5000 years).
- `Years' may be calendar years, or some other unit closely related to, but not exactly the same as, calendar years. The most common non-calendar year unit is `radiocarbon years'. The length of a radiocarbon year differs from that of a calendar year by a variable amount, resulting in the need for calibration: see below.
- Year zero: ages may be expressed as years AD or BC, or they may be expressed relative to a zero year. By convention, most palaeoecological ages are given as `years BP', where the zero year of `BP' is defined as AD 1950 (chosen because it predates the distortion of ^{14}C abundances in the upper atmosphere by nuclear weapons testing).
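The BP convention above can be illustrated with a short sketch (function names are my own, not from the source):

```python
def ad_to_bp(year_ad):
    """Convert a calendar year AD to years BP, where 0 BP = AD 1950."""
    return 1950 - year_ad

def bc_to_bp(year_bc):
    """Convert a calendar year BC to years BP.
    The AD/BC scale has no year zero, so 1 BC is 1950 years before AD 1950."""
    return year_bc + 1949
```

So, for example, AD 1000 is 950 BP, and 1 BC is 1950 BP.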

- Conventional (bulk) radiocarbon ages. This method of determining the age of a material works by counting decays from ^{14}C atoms. The number of decays over a known period of time enables the ^{14}C content of the material to be determined. The ^{12}C content is determined by other means, and the departure of the ^{12}C:^{14}C ratio from the equilibrium value enables calculation of the age of the material.
- Accelerator (AMS) radiocarbon ages. This method works by counting the numbers of ^{12}C and ^{14}C atoms directly in a mass spectrometer, and then obtaining the age of the material from the ratio, as with the conventional method.
- Errors. There is no difference, from a data-handling point of view, between the errors obtained by AMS and conventional methods. The errors given with radiocarbon ages are obtained by methods that vary between laboratories.
A conventional radiocarbon age is based on counts of decaying ^{14}C atoms. Some laboratories assume that the decays are distributed as a Poisson process. In this case, var(N) = N, where N is the number of observed decays. Thus, the coefficient of variation is smaller for larger samples, longer counting times, and younger samples. Other laboratories count the sample in a series of short time periods (e.g. 100 minutes), and calculate a standard error of the mean from this series. A standard error of the mean is equal to *s*/sqrt(*n*), where *s* is the sample standard deviation, and *n* is the number of individual counts in the series. This statistic cannot be interpreted without knowing the value of *n*, but *n* is not usually reported.

Radiocarbon ages obtained by accelerator mass spectrometry (AMS) depend upon counts of ^{14}C atoms arriving at a detector, which may be assumed to be a Poisson process, as for the detection of decaying ^{14}C atoms in the conventional method. However, the errors on AMS ages also depend upon complex laboratory factors. I shall assume that the quoted standard deviation for any radiocarbon age can be treated as a sample standard deviation. This is obtained from measurements, of course, and is an estimate of the true (population) standard deviation.
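The two error statistics just described can be sketched as follows (a minimal illustration under the stated assumptions; the function names are my own):

```python
import math

def poisson_cv(n_counts):
    """Coefficient of variation of a Poisson count: var(N) = N,
    so sd(N) = sqrt(N) and cv = sqrt(N)/N. Larger counts give a
    smaller relative error."""
    return math.sqrt(n_counts) / n_counts

def standard_error_of_mean(counts):
    """Standard error of the mean from a series of repeated short
    counting periods: s / sqrt(n), with s the sample standard deviation."""
    n = len(counts)
    mean = sum(counts) / n
    s = math.sqrt(sum((c - mean) ** 2 for c in counts) / (n - 1))
    return s / math.sqrt(n)
```

Note how `poisson_cv(10000)` is ten times smaller than `poisson_cv(100)`: the relative error shrinks as the count grows.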

One, albeit crude, approach to this problem is to take the mid-point between the pair of calendar ages that enclose the 95% confidence interval as an estimate of the calendar age of the sample, and half the distance between this age and either of the ages marking the confidence interval as the standard deviation of that estimate.
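This crude mid-point summary reduces to two lines of arithmetic (a sketch; the function name is my own):

```python
def midpoint_summary(ci_lower, ci_upper):
    """Summarise a calibrated 95% confidence interval by its mid-point,
    taking half the distance from the mid-point to either end of the
    interval as the standard deviation (a 95% interval spans roughly
    +/- 2 standard deviations, hence the division by 4)."""
    mid = (ci_lower + ci_upper) / 2
    sd = (ci_upper - ci_lower) / 4
    return mid, sd
```

For example, an interval of 5000-5400 cal BP summarises as 5200 ± 100.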

Two approaches to radiocarbon calibration have been developed. Classically,
radiocarbon ages are calibrated by comparison of the age determination with
the currently accepted calibration curve (of which the latest version
is INTCAL98 [Stuiver *et al*. 1998]). Alternatively, ages may be
calibrated by Bayesian statistics, which means incorporating other
information, appropriately weighted, to constrain the calibration. For example,
if a sample is known to be younger than a tephra layer of known exact
age, this information can, and should, be used. BCal,
an online calibration system, has been developed to do exactly this.

Definition of terms:

- Deposition time (DT): time elapsed during accumulation of unit sediment thickness (units: radiocarbon years cm^{-1})
- Microfossil deposition rate (MDR): microfossils incorporated into the sediment on a unit area of lake bottom each radiocarbon year (units: grains cm^{-2} [radiocarbon year]^{-1})
- Sedimentation rate (SR): inverse of deposition time

**Linear interpolation**

This is the most frequently used age-depth model, and the most obvious and basic way to start. Reported radiocarbon ages are plotted against depth, with the points connected by straight lines (often necessitating extrapolation to the base of the sequence). Estimates of DT are found from the gradients between adjacent pairs of points, and interpolated ages read off (or calculated) for intermediate depths. It is a superficially crude approach, but does provide reasonable estimates for both ages and gradients. However, it takes no account of the errors on the radiocarbon ages, and it turns out to be inadequate when confidence intervals on ages and slopes are obtained. Note also that the gradient will normally change at every radiocarbon age, which is far from necessarily a reasonable reflection of what really happens as basins infill.

**Spline interpolation**

A spline is a polynomial (see below) fitted between each pair of points, but whose coefficients are determined slightly nonlocally: some information is used from points other than the pair under immediate consideration. This nonlocality is intended to make the fitted curve smooth overall, so that it does not change gradient abruptly at each data point. The usual polynomial fitted between pairs of points is a cubic (4-term) polynomial, producing a cubic spline. This method also takes no account of the errors on the radiocarbon ages, and can produce 'ruffle-like' bends that include sections with negative DT.

**Polynomial line-fitting**

Polynomials with the following form are fitted to the data:

*y* = *a* + *bx* + *cx*^{2} + *dx*^{3}, etc.

where

*x* = depth (independent variable), *y* = age (dependent variable), and *a*, *b*, *c*, *d*, etc. are coefficients that must be estimated.

Polynomials may be considered by the number of terms they include:

*y* = *a* + *bx* has 2 terms, and is a straight line

*y* = *a* + *bx* + *cx*^{2} has 3 terms, and is a quadratic curve

*y* = *a* + *bx* + *cx*^{2} + *dx*^{3} has 4 terms, and is a cubic curve, etc.

These curves can be differentiated to obtain *dy/dx*, the gradient (rate of change of *y*) at any depth *x*.

If *y* = *a* + *bx*, then *dy/dx* = *b* (constant gradient for all *x* values)

If *y* = *a* + *bx* + *cx*^{2}, then *dy/dx* = *b* + 2*cx*

If *y* = *a* + *bx* + *cx*^{2} + *dx*^{3}, then *dy/dx* = *b* + 2*cx* + 3*dx*^{2}

etc.

Thus, a straight line regression can be seen as a polynomial that has just 2 terms.
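The evaluation and differentiation rules above generalise to any number of terms; a minimal sketch (function names are my own):

```python
def poly_eval(coeffs, x):
    """Evaluate y = a + b*x + c*x**2 + ... with coeffs = [a, b, c, ...]."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def poly_gradient(coeffs, x):
    """Evaluate dy/dx = b + 2*c*x + 3*d*x**2 + ... at depth x.
    Each term c_i * x**i differentiates to i * c_i * x**(i-1)."""
    return sum(i * c * x ** (i - 1) for i, c in enumerate(coeffs) if i > 0)
```

For the 2-term polynomial (straight line), `poly_gradient` returns *b* regardless of *x*, exactly as in the first rule above.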

The idea of fitting a curve is to find a line that is a reasonable model of
the data points. The curve does not necessarily have to pass through all the
points because the points are only statistical estimates of the 'true'
(unknown) radiocarbon age of the sample. For *y = a + bx*, we need to find
values for *a* and *b* such that values of *y* calculated from
the line at each *x* are as close as possible to the observed values of
*y*. "As close as possible" can be defined in many ways, of
which the most usual is 'least-squares'. This means minimising the sum of the
squared distances for the dependent variable. The errors on the radiocarbon
ages are incorporated as weighting on the dependent variable. It will normally
be appropriate to include an age and error estimate for the top sample of a
sequence (use -50 ± 50). The procedure for polynomials with
more terms is conceptually identical, but the arithmetic for finding *a*,
*b*, *c*, etc becomes more complex.

The coefficients obtained enable a curve to be plotted and gradients to be calculated by differentiation. Curves become more 'flexible' with more terms. We want to use a polynomial that is as simple as possible (few terms), but is still a 'reasonable' fit.

Goodness-of-fit may be assessed from Chi-squared: the squared distances from the dependent variable to the fitted curve are each divided by the squared error on the corresponding age, and summed. This approach assumes that the quoted errors on the radiocarbon ages are the population values. In practice, they are sample values from one measurement exercise, and will tend to be slightly too small as estimators of the population values. Chi-squared is zero for a perfect fit (i.e. the fitted curve passes through all the given data points), and this will always occur when the number of terms is equal to the number of data points.

The Chi-squared value may be assessed, from tables or analytically, as a function of the number of ages, the standard deviations of the ages, and the number of terms in the polynomial, to provide a measure of 'goodness-of-fit'. This measure is the probability that the observed difference between the fitted curve and the data points could have been obtained by chance if the fitted curve were the 'correct' solution. Thus, ideally, the goodness-of-fit should exceed 0.05, but values as low as 0.001 may, with caution, be acceptable. Neither Chi-squared nor the 'goodness-of-fit' measure can make any judgement about the course of the fitted curve between or beyond the given points: assessment of this remains a matter for the analyst to explore. The goodness-of-fit will be unacceptably low if one or more of the following conditions holds:

- the model is wrong (the polynomial is a poor statement of the way that sediment has accumulated over time);
- the errors on the radiocarbon ages are too small;
- the errors on the radiocarbon ages are not normally distributed.
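The weighted Chi-squared statistic described above can be sketched in a few lines (an illustration only; the function name is my own, and degrees of freedom would be the number of ages minus the number of polynomial terms):

```python
def chi_squared(ages, fitted, errors):
    """Goodness-of-fit statistic: each squared distance between an
    observed radiocarbon age and the fitted curve is divided by the
    squared error on that age, and the results are summed.
    Zero means the curve passes exactly through every point."""
    return sum(((a - f) / e) ** 2 for a, f, e in zip(ages, fitted, errors))
```

For example, two ages each missed by exactly one standard deviation give a Chi-squared of 2.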

The calculation of age estimates and deposition times (gradients) through any of these age-depth models is straightforward, and the results are taken as the means of distributions to be found by simulation. Ages and gradients are obtained for each depth of interest, usually the location of each pollen sample in the sequence.

The simulation is then carried out by drawing random numbers to simulate the radiocarbon ages, plus a value for the surface sample. Random numbers are drawn from normal distributions of zero mean and unit variance; each is multiplied by the error of the age being simulated, and added to the reported value for the age. One random number is drawn for each age, then the age-model is fitted and estimates obtained for the age and gradient at each depth of interest. These values are then accumulated through a series of simulations (100, for example), and the sample standard deviation for each age and gradient can then be found. This is a simple process to implement in any program that calculates age-depth relationships (whether the models outlined above, or others), since it involves only drawing random numbers from a given distribution, and looping through the modelling part of the program while accumulating results.

For 100 simulations, the standard error of the mean is 1/sqrt(100) = 0.1 of the sample standard deviation, and the standard error of the sample standard deviation is 0.5 sqrt(2)/sqrt(100) = 0.071 of the sample standard deviation. It is possible to check that the mean and standard deviation of each simulated distribution are close to the observed mean and standard deviation, and consistent with these error estimates. Simulation results are used to calculate standard deviations of gradients and interpolated ages, but the gradients and ages themselves are derived from the observed radiocarbon ages.
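The simulation procedure can be sketched for the simplest age-depth model, linear interpolation (a minimal illustration of the approach, not any particular program's implementation; function names are my own):

```python
import random
import statistics

def interp_age(depths, ages, d):
    """Age at depth d by linear interpolation between dated levels
    (depths and ages are parallel lists, depths increasing)."""
    for (d0, a0), (d1, a1) in zip(zip(depths, ages), zip(depths[1:], ages[1:])):
        if d0 <= d <= d1:
            return a0 + (a1 - a0) * (d - d0) / (d1 - d0)
    raise ValueError("depth outside dated range")

def simulated_age_sd(depths, ages, errors, target_depth, n_sim=100, seed=1):
    """Standard deviation of the interpolated age at target_depth:
    perturb each dated age by a standard normal deviate multiplied by
    its error, refit the model, and accumulate the estimates."""
    rng = random.Random(seed)
    estimates = [
        interp_age(depths,
                   [a + rng.gauss(0, 1) * e for a, e in zip(ages, errors)],
                   target_depth)
        for _ in range(n_sim)
    ]
    return statistics.stdev(estimates)
```

The same loop applies unchanged to splines or polynomials: only the model-fitting step inside the loop differs.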

The rate of change is calculated as

rate = *D*(*i*, *j*) / (*t_j* - *t_i*)

where *D*(*i*, *j*) is the dissimilarity between samples *i* and *j*, and *t_i* and *t_j* are their ages, so the rate is expressed as dissimilarity per unit time.
In this approach, we take the time as the interval between the ages of any pair of samples for which we have a dissimilarity measure. Another approach is to smooth the sequence and interpolate to constant time intervals, then to calculate the dissimilarity measures and divide by the (constant) time interval.
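Either way, the final step is the same division (a trivial sketch; the function name is my own):

```python
def rate_of_change(dissimilarity, age_i, age_j):
    """Rate of palynological change between two samples:
    dissimilarity divided by the time interval separating their ages."""
    return dissimilarity / abs(age_j - age_i)
```

For example, a dissimilarity of 0.5 across a 100-year interval gives a rate of 0.005 per year.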

- General
- Radiocarbon web-info. Includes links to all aspects of dating, calculation, and calibration.

- Calibration programs
- CALIB 4.12 available for DOS, Windows 95/98/NT, Linux, and Apple Macintosh.
- Oxcal 3.3 for Windows 95/98/NT (an earlier version for Windows 3.1 is still available).
- BCal: on-line Bayesian radiocarbon calibration tool.

- Depth-age modelling
- **psimpoll** offers depth-age modelling using linear interpolation, splines, and polynomial line-fitting. Confidence intervals can be calculated on age determinations and sediment accumulation rates. Rates of change may be calculated, and samples can be interpolated to constant intervals. Exact calculation of deposition times.
- **Tilia** offers depth-age modelling using linear interpolation, spline interpolation, and polynomials. Rates of change and interpolation are possible. Deposition times are calculated as tangents to fitted curves, rather than exactly.
- **POLSTA** offers depth-age modelling using linear interpolation or power curves. Built-in algebra routines enable more complex models.
- **DEP-AGE**, by L.J. Maher, Jr, provides depth-age modelling using linear interpolation, cubic splines, exponential functions, power functions, and best-fit polynomials. Details in the INQUA Data-handling Newsletter.

Bennett, K.D. & Humphry, R.W. 1995. Analysis of late-glacial and
Holocene rates of vegetational change at two sites in the British Isles.
*Review of Palaeobotany and Palynology*, **85**, 263-287.

Pilcher, J.R. 1991. Radiocarbon dating for the Quaternary scientist.
*Quaternary Proceedings* **1**, 27-33.

Prentice, I.C. 1980. Multidimensional scaling as a research tool in
Quaternary palynology: a review of theory and methods. *Review of
Palaeobotany and Palynology*, **31**, 71-104.

Stuiver, M., Reimer, P.J., Bard, E., Beck, J.W., Burr, G.S., Hughen, K.A.,
Kromer, B., McCormac, F.G., v. d. Plicht, J., and Spurk, M., 1998. INTCAL98
Radiocarbon age calibration 24,000 - 0 cal BP. *Radiocarbon* **40**,
1041-1083.

Copyright © 1999 K.D. Bennett

Archaeology and Palaeoecology | 42 Fitzwilliam St | Belfast BT9 6AX | Northern Ireland | tel +44 28 90 97 5136

Archaeology and Palaeoecology | The 14Chrono Centre | URL http://www.qub.ac.uk/arcpal/