Detailed Information

In-memory database acceleration on FPGAs: a survey

DC Field | Value | Language
dc.contributor.author | Fang, Jian | -
dc.contributor.author | Mulder, Yvo T. B. | -
dc.contributor.author | Hidders, Jan | -
dc.contributor.author | Lee, Jinho | -
dc.contributor.author | Hofstee, H. Peter | -
dc.date.accessioned | 2024-05-02T05:59:32Z | -
dc.date.available | 2024-05-02T05:59:32Z | -
dc.date.created | 2024-03-20 | -
dc.date.issued | 2020-01 | -
dc.identifier.citation | VLDB JOURNAL, Vol.29 No.1, pp.33-59 | -
dc.identifier.issn | 1066-8888 | -
dc.identifier.uri | https://hdl.handle.net/10371/200506 | -
dc.description.abstract | While FPGAs have seen prior use in database systems, in recent years interest in using FPGAs to accelerate databases has declined in both industry and academia for the following three reasons. First, specifically for in-memory databases, FPGAs integrated with conventional I/O provide insufficient bandwidth, limiting performance. Second, GPUs, which can also provide high throughput and are easier to program, have emerged as a strong accelerator alternative. Third, programming FPGAs requires developers to have full-stack skills, from high-level algorithm design to low-level circuit implementations. The good news is that these challenges are being addressed. New interface technologies connect FPGAs into the system at main-memory bandwidth, and the latest FPGAs provide local memory competitive in capacity and bandwidth with GPUs. Ease of programming is improving through support of shared coherent virtual memory between the host and the accelerator, support for higher-level languages, and domain-specific tools to generate FPGA designs automatically. Therefore, this paper surveys the use of FPGAs to accelerate in-memory database systems, targeting designs that can operate at the speed of main memory. | -
dc.language | English | -
dc.publisher | SPRINGER | -
dc.title | In-memory database acceleration on FPGAs: a survey | -
dc.type | Article | -
dc.identifier.doi | 10.1007/s00778-019-00581-w | -
dc.citation.journaltitle | VLDB JOURNAL | -
dc.identifier.wosid | 000492650700001 | -
dc.identifier.scopusid | 2-s2.0-85076510155 | -
dc.citation.endpage | 59 | -
dc.citation.number | 1 | -
dc.citation.startpage | 33 | -
dc.citation.volume | 29 | -
dc.description.isOpenAccess | Y | -
dc.contributor.affiliatedAuthor | Lee, Jinho | -
dc.type.docType | Article | -
dc.description.journalClass | 1 | -
dc.subject.keywordPlus | HIGH-LEVEL SYNTHESIS | -
dc.subject.keywordPlus | MULTI-CORE | -
dc.subject.keywordPlus | COMPRESSION | -
dc.subject.keywordPlus | STORAGE | -
dc.subject.keywordPlus | JOINS | -
dc.subject.keywordAuthor | Acceleration | -
dc.subject.keywordAuthor | In-memory database | -
dc.subject.keywordAuthor | Survey | -
dc.subject.keywordAuthor | FPGA | -
dc.subject.keywordAuthor | High bandwidth | -
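
The abstract above notes that FPGA programmability is improving through higher-level languages and high-level synthesis (see the HIGH-LEVEL SYNTHESIS keyword). For illustration only, and not taken from the survey itself, the sketch below shows what a simple in-memory selection (filter) kernel might look like when written in C for a Vivado/Vitis-HLS-style flow; the function name, batch size, port bundles, and pragmas are assumptions about one such toolchain, not the paper's design.

#include <stdint.h>

#define BATCH 1024  /* rows per kernel invocation -- illustrative assumption */

/*
 * filter_range: hypothetical selection kernel for an HLS flow. It scans a
 * column of 32-bit keys and copies those falling in [lo, hi) to an output
 * buffer, returning the match count. The pragmas follow Vivado/Vitis HLS
 * conventions and are ignored by an ordinary C compiler.
 */
int filter_range(const uint32_t keys[BATCH], uint32_t lo, uint32_t hi,
                 uint32_t out[BATCH])
{
#pragma HLS INTERFACE m_axi port=keys bundle=gmem
#pragma HLS INTERFACE m_axi port=out  bundle=gmem
    int n = 0;
    for (int i = 0; i < BATCH; i++) {
#pragma HLS PIPELINE II=1  /* aim for one row per clock once the pipeline fills */
        uint32_t k = keys[i];
        if (k >= lo && k < hi) {
            out[n] = k;
            n++;
        }
    }
    return n;
}

With shared coherent virtual memory between host and accelerator, as discussed in the abstract, keys and out could in principle point directly into the database's own column buffers rather than into copies staged across an I/O link.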
Appears in Collections:
Files in This Item:
There are no files associated with this item.

Related Researcher

Lee, Jinho
  • College of Engineering
  • Department of Electrical and Computer Engineering
Research Area: AI Accelerators, Distributed Deep Learning, Neural Architecture Search
