Many studies have shown that distributions of the distances that offenders travel in the commission of their offences are typically characterised by a decay function. However, there are few empirical comparisons of the different mathematical functions which may characterise such distributions. Further, there has been little consideration of what different forms of function may reflect about the underlying factors and psychological processes governing this aspect of the journey to crime. With the increasing use of geographical profiling systems which incorporate decay functions into their calculations, it is of particular value to explore the most appropriate mathematics for describing the frequencies of crime journeys and to determine the impact of different decay functions on the effectiveness of a geographical profiling system. A two-stage study was therefore carried out using data derived from 96 US serial killers. In the first stage, three different decay functions were examined in terms of the extent to which they fitted the distribution of distances travelled to offend for the sample: logarithmic, in accordance with Stevens' 'Power Law' for distance estimation; negative exponential, as an estimate based on the 'friction' generated by journeys; and quadratic, which reflects key principles found in journey to crime research. A 'control' function, simple negative linear, was also tested against the data. It was found that the logarithmic function provided the closest approximation to the journey to crime distances of offenders in the present sample (R² = 0.81, p < 0.001), suggesting that distance estimation may be an important part of the explanation for the length of the crime trips that offenders make. In the second stage, all four functions were utilised within a geographical profiling system (Dragnet) and their impact on the search cost for locating an offender established for the whole sample.
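The first-stage comparison can be sketched as follows. This is a hypothetical illustration only, not the study's data or analysis code: the distance bins, frequencies, and the exact parameterisations of the four candidate functions are all assumptions made for the example.

```python
# Sketch: fitting four candidate decay functions to binned
# journey-to-crime frequencies and comparing goodness of fit (R^2).
# All data and functional forms here are hypothetical illustrations.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical binned data: bin-midpoint distance (km) and relative frequency.
d = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5])
f = np.array([0.30, 0.20, 0.14, 0.10, 0.08, 0.06, 0.05, 0.03, 0.02, 0.02])

# Candidate decay functions (parameterisations assumed for illustration).
def logarithmic(x, a, b):      return a - b * np.log(x)
def neg_exponential(x, a, b):  return a * np.exp(-b * x)
def quadratic(x, a, b, c):     return a + b * x + c * x**2
def linear(x, a, b):           return a - b * x

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted curve."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

results = {}
for name, fn in [("logarithmic", logarithmic),
                 ("negative exponential", neg_exponential),
                 ("quadratic", quadratic),
                 ("linear", linear)]:
    params, _ = curve_fit(fn, d, f, maxfev=10000)
    results[name] = r_squared(f, fn(d, *params))

# Rank the functions by how well they fit the (hypothetical) distribution.
for name, r2 in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:>20s}: R^2 = {r2:.3f}")
```

With genuinely decaying frequency data like the above, all four forms achieve some fit, and the ranking of their R² values is what distinguishes them, mirroring the comparison described in the study.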
In general it was found that the search cost function, which relates the proportion of the sample to the search cost, was positively monotonic with a distinct change in gradient at around 58% of the sample, indicating that the software was producing useful results in the majority of cases. However, although the logarithmic function produced the best results overall, and the linear function the worst, as hypothesised, no significant differences between the search costs were found when each of the different functions was utilised. The implications for the robustness of the software and the possible influence of the low precision of the raw data are discussed.
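The "search cost" measure used in the second stage can be illustrated with a minimal sketch. This is not Dragnet's implementation; the grid surface, cell coordinates, and the simple rank-based definition below are assumptions made for the example: given a prioritisation surface over a search area, the search cost is the proportion of the area that must be examined, in descending priority order, before reaching the cell containing the offender's base.

```python
# Sketch of a search-cost calculation (hypothetical, not Dragnet's code).
import numpy as np

def search_cost(priority, offender_cell):
    """Fraction of cells ranked at or above the offender's cell,
    i.e. the proportion of the area searched before the offender
    is located when cells are visited in descending priority."""
    target = priority[offender_cell]
    return np.sum(priority.ravel() >= target) / priority.size

# Tiny worked example: a 2x2 surface where the offender's cell (0, 0)
# holds the highest priority value, so only 1 of 4 cells is searched.
surface = np.array([[0.9, 0.1],
                    [0.5, 0.2]])
print(search_cost(surface, (0, 0)))  # -> 0.25
```

Averaging this quantity over a sample of offenders, for each decay function in turn, gives the kind of function-by-function comparison of search costs reported above.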