Managing Data Saturation in AI Systems: A Cross-Domain Framework Integrating Human Insights and Algorithmic Verification
DOI: https://doi.org/10.55578/isgm.2508.004

Keywords: Adaptive filtering, Contextual relevance, Crowdsourced verification, Data saturation, Feedback loops, Hybrid data integration, Real-time AI, Temporal decay

Abstract
Artificial Intelligence (AI) systems are increasingly relied upon for real-time decision-making, yet high data volumes often introduce noise, inefficiencies, and performance degradation. Drawing on the concept of data saturation in qualitative research, this study examined how excessive or unfiltered data compromises AI system effectiveness. The study aimed to develop a cross-domain conceptual framework that integrates human insights with algorithmic verification to manage data saturation and ensure timely, accurate, and trustworthy AI outputs. A conceptual and analytical approach was employed, combining an extensive literature review with case analyses of high-volume navigation platforms, including Waze and Google Maps. The study investigated adaptive filtering, contextual relevance scoring, hybrid verification, and temporal decay mechanisms to identify generalisable strategies for managing data saturation in AI systems. Findings show that adaptive filtering, prioritisation of contextually relevant information, hybrid verification combining human input with automated data, and temporal decay mechanisms can mitigate data saturation. Lessons from Waze and Google Maps highlight the importance of balancing human participation with computerised processes to enhance system performance and user trust. Managing data saturation is thus a critical operational challenge for AI systems, not merely a research consideration. The proposed framework provides a structured approach to integrating human insights and algorithmic verification, improving decision accuracy, responsiveness, and reliability across domains. The study recommends adaptive data management strategies that combine human and algorithmic inputs, regular evaluation of system performance under high data loads, and context-aware prioritisation techniques. Further research is suggested to validate the framework empirically across diverse AI applications.
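The mechanisms named in the abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; all names, report types, weights, and thresholds are hypothetical, chosen only to show how contextual relevance scoring, hybrid (human-plus-automated) verification, and exponential temporal decay might combine into one adaptive filter over crowdsourced reports of the kind Waze and Google Maps ingest.

```python
import math
from dataclasses import dataclass

@dataclass
class Report:
    kind: str            # e.g. "accident", "congestion" (hypothetical categories)
    age_minutes: float   # time elapsed since the report was submitted
    corroborations: int  # independent confirmations from other users or sensors

# Contextual relevance: hypothetical base weight per report type.
BASE_RELEVANCE = {"accident": 1.0, "congestion": 0.6, "poi_update": 0.3}

def score(report: Report, half_life_minutes: float = 30.0) -> float:
    """Combined score = relevance x corroboration boost x temporal decay."""
    relevance = BASE_RELEVANCE.get(report.kind, 0.1)
    # Hybrid verification: each confirmation raises confidence,
    # saturating at roughly a 2x boost so no report dominates forever.
    corroboration = 2.0 - 1.0 / (1 + report.corroborations)
    # Temporal decay: the weight halves every `half_life_minutes`.
    decay = math.exp(-math.log(2) * report.age_minutes / half_life_minutes)
    return relevance * corroboration * decay

def adaptive_filter(reports, threshold: float = 0.3):
    """Keep only reports whose combined score clears the threshold."""
    return [r for r in reports if score(r) >= threshold]

if __name__ == "__main__":
    feed = [
        Report("accident", age_minutes=5, corroborations=3),     # fresh, confirmed
        Report("congestion", age_minutes=120, corroborations=0), # stale, unconfirmed
    ]
    print([r.kind for r in adaptive_filter(feed)])  # prints ['accident']
```

Under these assumed parameters, the fresh, corroborated accident report is retained while the two-hour-old, unconfirmed congestion report decays below the threshold and is discarded, mirroring the framework's aim of keeping the input stream timely and trustworthy rather than exhaustive.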
Data Availability Statement
This study exclusively used data obtained from secondary sources through a comprehensive literature review. All referenced data are publicly accessible and have been appropriately cited within the manuscript.
License
Copyright (c) 2025 Michael Mncedisi Willie (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.