Information Retrieval Systems for Efficient Multimedia Information Access
Description
An Information Retrieval System (IRS) is designed to store, organize, retrieve, and maintain information in response to user queries. Unlike traditional database systems, which rely on structured data and exact matching, an IRS focuses on retrieving relevant information from large collections of unstructured or semi-structured data such as text, images, audio, video, and other multimedia content. With the rapid growth of the Internet and advances in low-cost computing and storage technologies, information retrieval systems have become essential tools for managing vast digital repositories and enabling efficient access to knowledge.

The primary objective of an IRS is to reduce the user's effort in locating needed information. This effort, known as information retrieval overhead, includes query formulation, execution, examination of retrieved results, and reading non-relevant items. To evaluate system effectiveness, two key performance measures are used: precision, which reflects the accuracy of retrieved results, and recall, which measures the completeness of retrieval. A balance between these measures is crucial for effective information access.

Modern information retrieval systems support natural language queries, allowing users to express their information needs in everyday language. Internally, an IRS operates through several functional processes, including item normalization, selective dissemination of information, document database search, and index database search. Item normalization converts diverse data formats into standardized, searchable representations through processes such as zoning, token identification, and stop-word removal. Indexing and automatic file-building techniques further enhance retrieval efficiency.
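As a concrete illustration of the precision and recall measures described above, the following minimal sketch computes both for a single query. The function name and the sample item ids are illustrative assumptions, not part of the original text.

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved: ids of items returned by the system
    relevant:  ids of items judged relevant to the query
    """
    retrieved = set(retrieved)
    relevant = set(relevant)
    hits = retrieved & relevant  # relevant items actually retrieved

    # Precision: fraction of retrieved items that are relevant (accuracy).
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    # Recall: fraction of relevant items that were retrieved (completeness).
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall
```

For example, if the system retrieves items {1, 2, 3, 4} and the relevant set is {2, 4, 5}, precision is 2/4 = 0.5 and recall is 2/3, illustrating the trade-off the description mentions: retrieving more items tends to raise recall while lowering precision.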
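The token-identification and stop-word-removal steps of item normalization can be sketched in a few lines. The regular expression and the small stop-word list below are assumptions made for illustration; a real system would use a fuller list and language-aware tokenization.

```python
import re

# Illustrative subset of a stop-word list (assumption, not exhaustive).
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in"}

def normalize(text):
    """Token identification followed by stop-word removal."""
    # Identify tokens: lowercase runs of letters/digits.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    # Remove high-frequency words that carry little retrieval value.
    return [t for t in tokens if t not in STOP_WORDS]
```

Applied to the phrase "The retrieval of information in an IRS", this yields the searchable tokens ["retrieval", "information", "irs"], the kind of standardized representation the indexing stage then builds on.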
Files
(952.1 kB)

| Name | Size |
|---|---|
| Information Retrieval Systems for Efficient Multimedia Information Access.pdf (md5:8f61ab32f98d812ada5f3c0cb4c599ec) | 952.1 kB |
Additional details

Dates
- Submitted: 2026-01-23

An Information Retrieval System is a system designed for the storage, retrieval, and maintenance of information. In this context, information may consist of text (including numerical and date data), images, audio, video, and other multimedia elements. Modern techniques are emerging to support searching within these media types, such as EXCALIBUR's Visual RetrievalWare and the VIRAGE video indexer. The term item refers to the smallest complete unit that the system can process and manipulate. What constitutes an item depends on how a specific source organizes its information: a complete document such as a book, newspaper, or magazine may be treated as an item, and a video news program may likewise be considered an item, consisting of multiple components such as closed-caption text, the audio spoken by presenters, and the corresponding video frames.
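The notion of an item as a composite of modality-specific parts (closed-caption text, audio, video frames) can be modeled as a simple record. The class name, field names, and file paths below are hypothetical, chosen only to mirror the video-news example in the text.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """Smallest complete unit the system processes and manipulates.

    `components` maps a modality name to a reference to its content
    (names and paths here are illustrative assumptions).
    """
    item_id: str
    components: dict = field(default_factory=dict)

# A video news program treated as a single item with three components.
news = Item(
    item_id="evening-news-2024-05-01",
    components={
        "closed_caption": "captions.txt",
        "audio": "audio.wav",
        "video_frames": "frames/",
    },
)
```

Modeling the item this way lets normalization and indexing treat each component with a modality-appropriate process (text tokenization for captions, speech or image indexing for the rest) while retrieval still returns the item as one unit.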