Every time a detective solves a case, they link evidence back to their initial suspect. In geographical fieldwork, your conclusion does exactly this by returning to your original hypothesis or aim.
A successful conclusion must explicitly state whether the hypothesis is supported, partially supported, or rejected. AQA often asks "to what extent" your results met the aim, requiring a nuanced judgement that acknowledges any contradictory evidence.
Conclusions must be "evidenced" through Synthesis. This means bringing together different data sets and quoting specific figures, ranges, or statistical measures to prove your point (e.g., "The median pebble size decreased from 15 cm at Site 1 to 4 cm at Site 5").
You must establish a Causal Link by explaining why the data supports the conclusion. This requires linking your results to established geographical theory, such as the Bradshaw Model for physical geography or the Burgess/Hoyt models for human geography.
The Conclusion Logic Formula: Judgement (state whether the hypothesis is supported, partially supported, or rejected) + Evidence (quote specific figures, ranges, or statistical measures) + Causal Link (explain the pattern using geographical theory).
Sometimes the most valuable piece of data is the one that completely breaks your expected pattern.
An Anomaly is a result that does not fit the overall trend or expected geographical theory. For example, finding a large 15 cm boulder in the lower course of a river where the trend line suggests 2 cm pebbles.
On a scattergraph plotting Distance Downstream (x-axis) against Pebble Size (y-axis), anomalies are easily identified visually as data points sitting far away from the line of best fit. Statistically, they can be flagged using the Interquartile Range (IQR): values falling well outside the quartiles (commonly more than 1.5 × IQR below the lower quartile or above the upper quartile) are treated as anomalies.
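The IQR check can be sketched with Python's standard library. The pebble-size figures below are invented for illustration, echoing the 15 cm boulder example above:

```python
# Hypothetical pebble-size readings (cm) from a lower-course site,
# including the suspect 15 cm boulder.
import statistics

pebble_sizes_cm = [4, 3, 3, 2, 2, 3, 4, 2, 3, 15]

q1, _, q3 = statistics.quantiles(pebble_sizes_cm, n=4)  # lower and upper quartiles
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

# Any reading beyond either fence is treated as an anomaly.
anomalies = [x for x in pebble_sizes_cm if x < lower_fence or x > upper_fence]
print(anomalies)  # [15] -- the boulder is flagged
```

The same fences are what a box plot's whiskers conventionally mark, which is why anomalies also stand out visually on a scattergraph.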
The impact of an anomaly depends heavily on your sample size. In small samples, anomalies are highly influential and can skew your mean, leading to tentative conclusions. Large samples of 50 or more help "smooth out" the impact of irregular results.
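A quick illustration with invented figures shows how strongly one anomaly drags a small sample's mean compared with a larger one's:

```python
# Invented data: typical 3 cm pebbles plus one 15 cm anomaly,
# first in a sample of 10, then diluted in a sample of 50.
small = [3.0] * 9 + [15.0]
large = [3.0] * 49 + [15.0]

mean_small = sum(small) / len(small)   # 4.2 cm -- pulled 40% above the typical value
mean_large = sum(large) / len(large)   # 3.24 cm -- barely shifted
print(mean_small, mean_large)
```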
When discussing anomalies in your fieldwork, you must explain why each one occurred. This could be due to a localised bank collapse, a sudden gust of wind affecting equipment, or simple human miscounting.
If another student repeated your exact fieldwork tomorrow, would they get the exact same results?
When evaluating your enquiry, you must provide a balanced assessment of its strengths and weaknesses. AQA distinguishes between Accuracy, which is how close a measurement is to the true value, and limitations of scale, which refer to sample size.
Reliability refers to the consistency of your results and whether they are repeatable. It is often compromised by small sample sizes or unpredictable random errors.
Validity questions whether your data actually measures what was intended. For example, a pedestrian count on a Monday morning has low validity for measuring how "busy" a town is overall, because it does not represent typical behaviour.
To correctly evaluate your enquiry, you must provide a definitive judgement that weighs strengths against weaknesses. Here are worked examples modelling this balanced approach:
Most GCSE fieldwork suffers from the "snapshot in time" limitation. Taking measurements on a single dry day limits the validity of your conclusions regarding long-term, annual river discharge processes.
Data that relies on personal opinion, like Environmental Quality Surveys, introduces Subjectivity. This lowers overall validity due to Operator error/bias.
Even with the best planning, fieldwork is messy and measurements are rarely perfect.
Errors generally fall into two categories. A Random Error is an unpredictable mistake made by chance, such as misreading a stopwatch. A Systematic Error is a consistent mistake in every measurement, like using a tape measure stretched by 2 cm or a flow meter that wasn't zeroed.
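The difference between the two can be sketched with simulated readings (all numbers invented): averaging repeated measurements cancels random error, but a systematic offset survives no matter how many readings you take.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

true_width_m = 3.00
systematic_offset = 0.02   # e.g. a tape measure stretched by 2 cm

# Each reading carries the fixed offset plus a random error of up to +/- 5 cm.
readings = [true_width_m + systematic_offset + random.uniform(-0.05, 0.05)
            for _ in range(100)]

mean_reading = sum(readings) / len(readings)
# Averaging drives the random component toward zero, so the mean settles
# near 3.02 m -- still 2 cm off, because the systematic error never cancels.
```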
You must also identify Sampling Bias, which occurs when a sample is non-representative (e.g., pragmatic sampling by only surveying friendly-looking people). This bias can be fixed using Stratified Sampling, which divides a population into subgroups to ensure all demographics or geographical zones are represented.
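A minimal sketch of proportional stratified sampling, assuming an imagined town of 1,000 residents split into three age groups:

```python
# Hypothetical population strata (age groups) for a pedestrian survey.
population = {
    "under_25": 400,
    "25_to_64": 500,
    "over_64": 100,
}
total = sum(population.values())
sample_size = 50

# Each stratum's quota is proportional to its share of the population,
# so no demographic is over- or under-represented.
quota = {group: round(sample_size * count / total)
         for group, count in population.items()}
print(quota)  # {'under_25': 20, '25_to_64': 25, 'over_64': 5}
```

Within each quota, individuals would still be chosen randomly, combining the fairness of random sampling with guaranteed coverage of every subgroup.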
Equipment limitations also frequently cause errors. Reading a ruler at an angle causes a Parallax Error. Furthermore, using simple floats (like oranges) to measure river velocity only captures surface speed and is highly affected by wind, limiting reliability.
When asked to evaluate data collection errors in the exam, use the Three-Step Appraisal framework: identify the error, assess its effect on your conclusion, and suggest a solution.
Understanding what went wrong in your fieldwork is only half the battle; examiners want to know how you would fix it next time.
Quantitative improvements involve increasing your sample size (e.g., from 10 to 50) to improve reliability and reduce the influence of random errors and anomalies.
Methodological improvements focus on increasing accuracy by upgrading equipment. For example, swapping a standard ruler for digital callipers, or a float for a digital flow meter.
Temporal improvements boost validity by repeating the study at different times. Conducting surveys across different seasons, or on both weekdays and weekends, gives a much more representative picture.
Finally, primary "snapshot" data can be evaluated against secondary data sources. Checking your fieldwork results against Environment Agency flood maps or Census data helps verify long-term trends and improves the overall reliability of your conclusions.
Students often evaluate their fieldwork by saying it was "successful" because they completed it. Examiners want you to evaluate if the results were "reliable" or "representative", not whether you had a nice day out.
In 6-mark or 9-mark 'Evaluate' questions, you must link a specific error to its direct impact on your final conclusion (e.g., "A stretched tape overestimated the cross-sectional area, making the velocity-discharge link less reliable").
Avoid generic "geography of anywhere" answers by using place-specific evidence (e.g., quoting a specific site number or exact data value) to prove you actually conducted the enquiry.
Do not suggest "proximity to school" as a strength of your fieldwork location; AQA requires geographical reasons for site suitability.
When suggesting improvements, vague answers like "take more samples" will only score Level 1. Always be specific: "Increase the sample size to 50 to reduce the impact of random errors."
Synthesis
Bringing together different data sets to form a single cohesive conclusion.
Causal Link
A logical explanation of the processes that caused the observed pattern, often rooted in geographical theory.
Anomaly
A data point that deviates significantly from the rest of the dataset or the expected pattern.
Interquartile Range (IQR)
A statistical measure representing the middle 50% of data, used to mathematically identify anomalies falling outside the upper or lower quartiles.
Accuracy
The degree to which a measurement reflects the true value.
Reliability
The extent to which results are consistent and repeatable.
Validity
The suitability of the method to answer the specific enquiry question.
Subjectivity
Data collection that relies on personal opinion rather than objective measurement.
Operator error/bias
Errors or inconsistencies introduced by the person taking the measurement, often linked to subjective surveys.
Random Error
Unpredictable mistakes made by chance during data collection, which can be reduced by taking averages.
Systematic Error
A consistent error present in every measurement, often due to faulty equipment that has not been calibrated.
Sampling Bias
An error that occurs when a sample chosen is non-representative of the wider population.
Stratified Sampling
Dividing a population into subgroups to ensure all are proportionally represented, reducing sampling bias.
Parallax Error
A human error caused by viewing a measurement scale at an angle rather than straight on.
Three-Step Appraisal
A framework for structuring an evaluation of data collection errors by identifying the error, assessing its impact on the conclusion, and suggesting an improvement.