Publications

Detailed Information

A Dual-Mode Similarity Search Accelerator based on Embedding Compression for Online Cross-Modal Image-Text Retrieval

Cited 0 times in Web of Science · Cited 1 time in Scopus
Authors

Park, Yeo Reum; Kim, Ji Hoon; Do, Jae Young; Kim, Joo Young

Issue Date
2022-05
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Annual IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM), pp. 99-107
Abstract
Image-text retrieval (ITR), which identifies the relevant images for a given text query or vice versa, is a fundamental task in emerging vision-and-language machine learning applications. Recently, a cross-modal approach that extracts image and text features in separate reasoning pipelines but performs the similarity search on a shared embedding representation has been proposed for real-time ITR systems. However, the similarity search, which finds the most relevant data among huge data embeddings for a given query, becomes the bottleneck of the ITR system. In this paper, we propose a dual-mode similarity search accelerator that overcomes this computational hurdle for online image-to-text and text-to-image retrieval services. We propose an embedding compression scheme that removes the sparsity in the text embeddings, eliminating the time-consuming masking operations in the later processing pipeline. Combined with data quantization from 32-bit floating point to 8-bit integer, we reduce the target dataset size by 95.1% with less than 0.1% accuracy loss for 1024-dimensional embedding features. In addition, we propose a streamlined similarity search dataflow for both query types that minimizes the required memory bandwidth through maximal data reuse: with the optimized dataflow, the query and data embeddings are guaranteed to be fetched from external memory only once. Based on the proposed data representation and dataflow, we design a scalable similarity search accelerator comprising multiple ITR kernels. Each ITR kernel has a modular design, composed of a separate memory access module and a computing module. The computing module supports pipelined execution of the four similarity search tasks: dot-product calculation, data reordering, partial-score aggregation, and ranking. We double the number of processing operations in the computing module with a DSP packing technique.
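As a rough software analogue of the compression scheme described above (dropping masked, zero-valued dimensions from the text embeddings, then quantizing FP32 to INT8), the sketch below illustrates the idea. The sparsity level, the symmetric quantizer, and all function names are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def compress_text_embedding(emb):
    """Drop zero (masked) dimensions so later stages need no masking.
    Returns the kept values and their original dimension indices."""
    idx = np.nonzero(emb)[0]
    return emb[idx], idx

def quantize_int8(emb):
    """Symmetric quantization from FP32 to INT8 (assumes a nonzero input)."""
    scale = float(np.abs(emb).max()) / 127.0
    q = np.clip(np.round(emb / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
emb = rng.standard_normal(1024).astype(np.float32)
emb[rng.random(1024) < 0.8] = 0.0     # assumed 80% sparsity, for illustration

dense, idx = compress_text_embedding(emb)
q, scale = quantize_int8(dense)
# 4 B per dimension -> 1 B per kept dimension (index storage ignored here)
print(f"size reduction: {1 - q.nbytes / emb.nbytes:.1%}")
```

Quantization alone accounts for a 75% reduction (4 bytes to 1 byte per dimension); the remaining savings in this toy model come from dropping the zeroed dimensions.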
Finally, we implement the proposed accelerator with six ITR kernels on the Xilinx Alveo U280 FPGA card. It achieves 2.98 tera operations per second (TOPS) at 186 MHz, delivering 526/144 and 1163/306 queries per second (QPS) for image-to-text and text-to-image retrieval on the MS-COCO 1K/5K benchmarks. It is up to 359.0x and 13.9x faster, and 503.6x and 68.7x more energy-efficient, than the baseline and optimized GPU implementations on an Nvidia Titan RTX, respectively.
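The similarity-search pipeline described in the abstract (dot-product calculation, partial-score aggregation, ranking, with each embedding fetched only once) maps naturally onto a tiled integer matrix-vector product. The following is a hedged software sketch under assumed function names and tiling, not the accelerator's actual dataflow:

```python
import numpy as np

def topk_similarity(query_q, data_q, k=5, tile=256):
    """Tiled INT8 dot-product search: each query/data tile is read once,
    partial scores accumulate in int32, and a final sort yields the ranking."""
    scores = np.zeros(data_q.shape[0], dtype=np.int32)
    for start in range(0, query_q.shape[0], tile):
        q_tile = query_q[start:start + tile].astype(np.int32)   # fetch query tile once
        d_tile = data_q[:, start:start + tile].astype(np.int32) # fetch data tile once
        scores += d_tile @ q_tile        # dot product + partial-score aggregation
    top = np.argsort(-scores)[:k]        # ranking step
    return top, scores[top]
```

In a dual-mode setting, the same routine can serve both retrieval directions by swapping which side (image or text embeddings) plays the roles of query and data.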
ISSN
2576-2613
URI
https://hdl.handle.net/10371/201364
DOI
https://doi.org/10.1109/FCCM53951.2022.9786159
Related Researcher

  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: Algorithm-system co-design for AI applications, AI-powered Big Data Management, Generative AI, Large Language Model, ML, High-performance large-scale AI data analysis and processing, Modal AI

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
