Workflow: Data Quality / ILI Review
Data quality and ILI review is the workflow used to decide whether the inspection data are trustworthy enough to support a real engineering decision. In practice this means checking whether the reported feature, location, sizing, and classification are still inside the tool's qualified capability and whether the tool call is unified with field reality, prior runs, weld alignment, and any excavation history.
Immediate escalation cues
- Escalate when data limitations could materially change prioritization, repair timing, or whether a feature is even in the correct workflow
- Escalate when classification, location confidence, weld association, or depth confidence remains unresolved
- Escalate when run-quality issues such as speed excursion, sensor loss, or degraded channels may affect a significant feature
Why It Matters
A report table can look precise while still carrying meaningful uncertainty. API 1163-style review matters because sizing tolerance, classification logic, tool configuration, sensor performance, speed excursions, navigation drift, analyst overrides, and partial channel loss can all change what the feature really is and how confidently it should be ranked. If the data basis is weak, the real engineering issue may be data defensibility rather than the listed anomaly dimensions.
Common scenarios
- A corrosion feature that looks near-threshold until the engineer notices the run had local speed excursion or degraded sensor performance
- A dent call that appears to have changed depth materially between runs, but the real issue may be tool configuration, alignment, or segmentation differences
- A crack-like indication that cannot be routed confidently because vendor notes show low confidence or analyst reclassification
- A feature that matches poorly to weld tally or dig history, raising concern that the location basis is not unified with field reality
- A cluster of anomalies where the data quality issue is not one feature but whether the local run coverage can support grouping at all
Key Concern Drivers
- Tool tolerance limits and whether the intended use stays within the API 1163 qualification basis
- Sensor loss, speed excursion, lift-off, noisy channels, or other run-quality events that may degrade confidence locally
- Feature matching, weld alignment, navigation drift, or location-control uncertainty
- Run-to-run differences in dimensions, position, depth, or classification that are too large to dismiss without closer review
- Vendor notes indicating analyst intervention, low confidence, reclassification, or partial data degradation
- Decisions that depend heavily on exact sizing, clock position, or weld proximity
- Weak tool-to-field unity, meaning the reported call does not line up cleanly with dig history, NDE, or prior validation
Data and Uncertainty
Core data
- Feature type and whether the reported condition is plain, interacting, or uncertain
- Depth and size information such as percent wall thickness, length, and local geometry extent
- Orientation and shape, including whether the feature is axial, circumferential, or irregular
- Reliable location information referenced to welds, bends, seams, and nearby anomalies
Context data
- Weld proximity and confirmation of girth-weld or seam association
- Pipe properties including wall thickness, grade/SMYS, diameter, and seam type
- Coating condition, environment, and any evidence of mechanical damage
- Pressure history, operating cycles, and local operating context where relevant
Advanced / situational data
- Detailed profile information for dents, strain-sensitive geometry, or irregular corrosion
- Prior ILI comparison to distinguish growth from reporting change
- Geotechnical, strain, or movement indicators if local loading may be part of the concern
- Excavation verification, NDE, UT mapping, or field observations when available
Missing or uncertain data that matters
- Missing or uncertain location control can change whether a feature is treated as plain body-pipe, weld-associated, or interacting
- Weak sizing confidence or classification uncertainty can materially limit screening quality
- Lack of prior inspection or field verification often increases the need for conservative judgment
Decision Logic
Is this feature usable for a decision, or only a clue that more reconciliation is needed?
Start by deciding whether the current call is decision-grade or whether it is only enough to trigger more review. A precise-looking table entry is not automatically decision-grade data.
Is the reported call inside the tool's qualified and trusted use case?
Check API 1163-style qualification context, vendor comments, run logs, and whether local issues such as speed excursion, sensor loss, missing channels, or degraded navigation could affect the feature.
Does the call line up with field reality strongly enough to trust it?
Compare against weld matching, prior runs, dig history, NDE, and any known validated locations. If the tool-to-field unity is weak, confidence should drop accordingly.
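As a concrete illustration of location reconciliation, the sketch below matches girth welds from an ILI odometer log against a reference weld tally within a drift tolerance. The weld positions, tolerance, and function names are hypothetical; real alignment relies on vendor software, above-ground marker tie-ins, and surveyed references.

```python
# Illustrative sketch only: a minimal girth-weld matching pass between an
# ILI odometer log and a reference weld tally. All positions and the
# tolerance are invented for demonstration.

def match_welds(ili_welds, tally_welds, tol_m=2.0):
    """Greedy nearest-neighbor match of ILI weld odometers (m) against a
    reference tally; unmatched welds on either side flag alignment risk."""
    matches, unmatched_ili = [], []
    remaining = sorted(tally_welds)
    for w in sorted(ili_welds):
        best = min(remaining, key=lambda t: abs(t - w), default=None)
        if best is not None and abs(best - w) <= tol_m:
            matches.append((w, best, best - w))   # (ili, tally, residual)
            remaining.remove(best)
        else:
            unmatched_ili.append(w)
    return matches, unmatched_ili, remaining

ili = [100.2, 112.4, 124.9, 137.1, 161.8]
tally = [100.0, 112.5, 125.0, 137.5, 150.0, 162.5]
matches, miss_ili, miss_tally = match_welds(ili, tally)
# a consistent trend in the residuals suggests odometer stretch that can
# be corrected; scattered unmatched welds are a stronger location warning
```

A consistent residual trend is correctable; welds missing on either side without explanation should lower confidence in the location basis before any feature-level conclusion is drawn.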
What do the unity plots and validation statistics actually say about this use case?
Use them to understand bias, scatter, outliers, and whether the relevant feature family behaved acceptably in validation. Do not stop at one headline accuracy number if local conditions, run issues, or feature type suggest weaker confidence.
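The bias, scatter, and outlier behavior described above can be sketched numerically. The example below compares tool-called depths with field-measured depths and reports bias, scatter, and the fraction of calls inside a tolerance band; the ±10 percent wall thickness tolerance and the sample values are placeholders, not a real performance specification.

```python
# Hedged sketch of a unity-style validation check: compare ILI-called
# depths (%WT) against field NDE depths, then estimate bias, scatter, and
# the fraction of calls inside a stated tolerance band. The tolerance and
# data are placeholders; use the tool's actual performance specification.
from statistics import mean, stdev

def unity_stats(tool_pct_wt, field_pct_wt, tol_pct_wt=10.0):
    errors = [t - f for t, f in zip(tool_pct_wt, field_pct_wt)]
    inside = sum(1 for e in errors if abs(e) <= tol_pct_wt)
    return {
        "bias_pct_wt": mean(errors),          # systematic over/under-call
        "scatter_pct_wt": stdev(errors),      # spread around the bias
        "containment": inside / len(errors),  # compare to claimed certainty
        "outliers": [(t, f) for t, f in zip(tool_pct_wt, field_pct_wt)
                     if abs(t - f) > tol_pct_wt],
    }

tool = [32, 41, 55, 28, 62, 47]
field = [30, 45, 50, 29, 75, 44]
stats = unity_stats(tool, field)
# a containment fraction below the spec's certainty level, or outliers
# concentrated in one feature family, argues against decision-grade use
```

A containment fraction below the claimed certainty level, or outliers clustered in one feature family, is exactly the kind of local signal a single headline accuracy number can hide.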
Are run-to-run differences telling you something real or just something inconsistent?
Do not assume differences mean growth. They may reflect segmentation changes, matching drift, analyst reclassification, or different tool behavior between runs.
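One hedged way to screen run-to-run depth differences is to compare the delta against the root-sum-square of the two runs' sizing tolerances: a delta inside that band is explainable by sizing noise alone. The tolerances below are illustrative, not vendor specifications.

```python
# Sketch: decide whether a depth change between two runs exceeds what the
# combined sizing tolerances alone could explain. Tolerance values are
# placeholders for the vendors' actual performance specifications.
import math

def significant_change(d1_pct_wt, d2_pct_wt, tol1=10.0, tol2=10.0):
    """True when the depth delta exceeds the root-sum-square of the two
    runs' tolerances, i.e. too large to attribute to sizing noise alone."""
    combined = math.hypot(tol1, tol2)   # RSS of independent tolerances
    return abs(d2_pct_wt - d1_pct_wt) > combined

significant_change(35, 42)   # 7 %WT delta, RSS band about 14.1 %WT
significant_change(35, 55)   # 20 %WT delta, candidate for real change
```

Even a delta outside the band is only a candidate for real growth; reclassification, matching drift, or segmentation changes between runs still need to be ruled out before the difference is treated as physical.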
What is the next defensible step if confidence remains limited?
If the uncertainty could change the workflow, timing, or threat classification, move toward reconciliation, vendor clarification, specialist review, or field verification rather than forcing closure.
Methods and Frameworks
API 1163 qualification and data-use review
A structured check of whether the ILI data are being used within the tool's qualified capability, performance specification, and validation basis.
When it may be used: Useful when screening depends on classification confidence, location accuracy, sizing tolerance, or whether the run experienced abnormal conditions such as speed excursions or degraded data channels.
When it is not appropriate: Not appropriate as a substitute for field verification, operator procedures, or defect-specific engineering once the feature mechanism is already understood.
Tool-to-field correlation review
Comparison of reported feature dimensions, location, and classification against digs, NDE, prior runs, and weld/alignment controls.
When it may be used: Useful when engineers need to decide whether the inspection output is unified with field reality strongly enough to support timing, ranking, or closure.
When it is not appropriate: Not appropriate when the review stops at report tables and never checks whether the feature behavior matches validation or field evidence.
Unity plot and validation-statistics review
Use unity plots, bias checks, spread statistics, and outlier review to understand how the tool performed against field truth for depth, length, location, or classification.
When it may be used: Useful when deciding whether the inspection is behaving consistently enough for the intended decision, especially near thresholds or when field correlation is mixed.
When it is not appropriate: Not appropriate if the engineer treats summary statistics as permission to ignore local outliers, feature-family differences, or known run-quality exceptions.
Run-quality and exception review
A structured check of run logs, vendor comments, and exception reports for events that may have degraded data quality locally.
When it may be used: Useful when speed excursions, sensor loss, navigation issues, or degraded channels may explain suspicious behavior in the feature list.
When it is not appropriate: Exception review identifies confidence issues but does not replace field confirmation if the decision impact is material.
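A minimal sketch of exception review, assuming the vendor's run log can be reduced to odometer windows: features falling inside a logged speed-excursion or sensor-loss window are flagged for a local confidence check. The window and feature records below are invented for illustration; real exception data comes from the vendor's run report.

```python
# Hypothetical sketch: flag features whose odometer position falls inside
# logged run-quality exception windows. All records are invented.

def flag_features(features, exception_windows):
    """features: [(feature_id, odometer_m)]
    exception_windows: [(start_m, end_m, kind)]
    Returns features whose reported confidence may be locally degraded."""
    flagged = []
    for fid, odo in features:
        for start, end, kind in exception_windows:
            if start <= odo <= end:
                flagged.append((fid, kind))
                break  # one flag per feature is enough to trigger review
    return flagged

windows = [(1200.0, 1450.0, "speed excursion"), (3010.0, 3025.0, "sensor loss")]
feats = [("F-101", 980.0), ("F-102", 1300.5), ("F-103", 3015.0)]
flagged = flag_features(feats, windows)
# flagged features need their sizing confidence reviewed before ranking
```

Flagging only localizes the confidence question; it does not say how much the sizing degraded, which is a vendor and specialist conversation.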
Analytical Considerations
- ILI review is less about one equation and more about whether the reported dimensions, location, and classification stay inside the tool's qualified performance envelope.
- Tolerance bands, matching error, sensor coverage, and run-quality issues can matter more than the reported number itself when decisions are close to a threshold.
- Unity plots and validation statistics help show bias, scatter, and outlier behavior, but they should be used to understand confidence limits, not to claim that every feature is equally trustworthy.
- The practical analytical question is whether the data are unified with field reality strongly enough to support a screening decision without additional verification.
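The threshold sensitivity noted above can be made explicit: apply the sizing tolerance to the reported depth and ask whether the resulting band straddles the action threshold. The 80 percent wall thickness threshold and ±10 percent tolerance below are placeholders for whatever criteria actually govern.

```python
# Sketch: a reported depth plus its sizing tolerance can straddle an
# action threshold even when the reported number alone does not.
# Threshold and tolerance values are illustrative placeholders.

def threshold_call(reported_pct_wt, tol_pct_wt=10.0, threshold_pct_wt=80.0):
    upper = reported_pct_wt + tol_pct_wt
    if reported_pct_wt >= threshold_pct_wt:
        return "exceeds threshold as reported"
    if upper >= threshold_pct_wt:
        return "within tolerance of threshold; treat as potentially exceeding"
    return "below threshold with tolerance applied"

threshold_call(72.0)  # reported below 80 %WT, but 72 + 10 straddles it
threshold_call(65.0)  # clear of the threshold even at the tolerance bound
```

This is the practical meaning of "close to a threshold": the decision should rest on the tolerance-adjusted band, not the single reported value.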
When This Drives Field Verification
- The feature may drive a dig when uncertainty, interaction, or local context makes desktop screening alone hard to defend.
- A dig becomes more attractive when field confirmation could materially change repair timing, disposition, or specialist escalation.
- A dig or field verification may be driven less by reported severity and more by the need to unify the inspection call with field reality when tool confidence is questionable.
Field Verification Workflow
- Confirm feature location, expose the pipe safely, and compare field location to the original screening data.
- Document actual condition, including coating state, surface condition, geometry, nearby welds, and whether the reported interaction is real.
- Capture measurements, photos, and any NDE or UT needed to support disposition.
- Record whether the field findings support the reported tool dimensions, classification, and location, and note any mismatch that points back to tool performance or run quality.
Review Outcomes and Disposition
- Disposition should state whether the reported call was accepted for use, downgraded to a clue requiring more reconciliation, routed to field verification, or escalated for specialist or vendor review.
- If the final outcome depended on tool limitations, matching uncertainty, or validation gaps, document that explicitly so the next review starts from the right confidence level.
Documentation and Defensibility
- Record the run ID, tool type, vendor comments, performance specification, and any noted run-quality issues such as speed excursion, sensor loss, or degraded navigation confidence.
- Document how the reported feature was reconciled to weld matching, prior runs, dig history, and field measurements.
- State clearly what the data can support, what they cannot support, and why the final decision was still reasonable.
Practical Next Steps
- Confirm data quality, run quality, and alignment before making a fine-grained ranking decision
- Review prior ILI and field verification history to decide whether the current call is consistent with known field reality
- Use unity plots and validation statistics to understand how the tool behaves, but keep local outliers and run-specific issues in view
- Check whether speed excursions, sensor loss, degraded channels, or analyst reclassification could explain unusual feature behavior
- Escalate when uncertainty is large enough to affect what the engineer should do next or which threat workflow should be used
- Reconcile location, weld matching, and clock position before relying on the call
- Pull vendor notes, run logs, and performance documentation for the affected area
- Review unity plots and supporting statistics to decide whether the tool behavior is acceptable for the intended use
- Compare with prior runs and any dig/NDE history to test tool-to-field unity
- Route uncertain cases for specialist review rather than forcing closure on weak data
Investigation / Documentation Guidance
Identification and Location
- Record feature ID, segment, stationing or mapping reference, and nearby weld or landmark context.
- State clearly whether the feature is isolated, interacting, or still uncertain.
Data Sources
- List the ILI run, prior runs, field notes, and any supporting drawings or weld data used in the review.
- If sources disagree, record that explicitly.
- Capture run-quality issues such as speed excursions, sensor loss, missing channels, or analyst reclassification notes.
Field Verification
- If excavated, note what was observed, measured, and how it compared with the desktop interpretation.
- Include whether field measurements and location control agreed with the tool call closely enough to support future use of similar data.
Assessment Summary
- Capture the final engineering view in plain language, including what drove the response path and what uncertainty remained.
References and Further Reading
Core Applicable Standards
Most directly relevant to this topic and commonly used to frame the main review path.
API 1163, In-line Inspection Systems Qualification
API
Why it applies: Useful for data quality checks, feature confidence review, matching questions, and any topic driven by ILI limitations.
What it generally addresses: Foundational guidance for understanding ILI system qualification, performance, validation, and responsible use of inspection outputs.
Limitations: This is a qualification and use framework, not a defect-specific engineering decision tool by itself.
Pipeline Data Quality and Reconciliation Practices
Internal / Program Guidance
Why it applies: Useful for classification uncertainty, matching issues, and when decisions depend on reconciling multiple data sources.
What it generally addresses: Placeholder entry for company or program-level practices covering reconciliation, validation, and data governance.
Limitations: Replace this placeholder with your organization’s actual SOP or governance document.
Supporting / Cross-Discipline References
Helpful when the review needs integrity-management, regulatory, or cross-discipline context beyond the primary method family.
In-Line Inspection of Pipelines
AMPP / NACE
Why it applies: Useful for corrosion review context, inspection capability questions, and understanding tool limitations.
What it generally addresses: Reference material related to selecting, planning, and interpreting in-line inspection programs.
Limitations: Provides broad inspection context rather than a topic-by-topic workflow for every anomaly.
API 579-1/ASME FFS-1, Fitness-For-Service
API
Why it applies: Useful as high-level fitness-for-service context when the condition needs broader damage-mechanism framing, documentation discipline, or escalation beyond simple screening.
What it generally addresses: General FFS mindset, damage-mechanism identification, and structured assessment thinking across multiple degradation types.
Limitations: It is not a pipeline integrity management rulebook and does not replace pipeline-specific methods, regulations, or company procedures.
API RP 1160, Managing System Integrity for Hazardous Liquid Pipelines
API
Why it applies: Provides integrity-management process context for anomaly prioritization, remediation planning, and defensible documentation.
What it generally addresses: Workflow discipline, repair scheduling context, and record quality rather than defect mechanics alone.
Limitations: Guidance framework only; enforceable timing comes from applicable CFR requirements and operator procedures.
PRCI research and guidance
PRCI
Why it applies: Useful when operator workflows need research-backed context on defect interaction, assessment limits, or field validation practice.
What it generally addresses: Industry best-practice and research support for complex or uncertain conditions.
Limitations: Research context is not itself an operating procedure or repair criterion.
49 CFR Parts 192 and 195
PHMSA
Why it applies: Provide the U.S. regulatory framework that operators commonly review when anomaly evaluation, remediation, documentation, and timing decisions need to be tied back to pipeline safety rules.
What it generally addresses: High-level regulatory context for integrity management, repair timing, maintenance, evaluation, and documented response.
CSA Z662 Oil and Gas Pipeline Systems
CSA Group
Why it applies: Provides Canadian technical and program context where the operator or jurisdiction uses CSA Z662 to frame integrity, maintenance, repair, and evaluation practices.
What it generally addresses: Canadian pipeline systems context for integrity management, maintenance expectations, and defect-related technical framework.
Additional Learning Resources
Good places to deepen understanding of practical behavior, research context, and broader industry guidance.
Pipeline Research Council International (PRCI)
PRCI
Why it applies: Publishes research that helps engineers understand real-world behavior, inspection limitations, interaction effects, and emerging practices across many threat types.
What it generally addresses: Research-backed context for defect behavior, validation limits, and applied integrity practice.
PHMSA and CER public guidance resources
PHMSA / CER
Why it applies: Useful for public advisories, guidance notes, and regulator-facing context that help explain where industry attention has been focused.
What it generally addresses: Public guidance, advisories, and oversight context for integrity programs and field response.