A Statistical Approach to Machine Translation


Brown, Peter F.; Cocke, John; Della Pietra, Stephen A.; Della Pietra, Vincent J.; Jelinek, Fredrick; Lafferty, John D.; Mercer, Robert L.; Roossin, Paul S.

Issue Date
1991
Publisher
Language Education Institute, Seoul National University (서울대학교 언어교육원)
Citation
Language Research (어학연구), Vol.27 No.1, pp. 1-17
Abstract
The field of machine translation is almost as old as the modern digital computer. In 1949 Warren Weaver suggested that the problem be attacked with statistical methods and ideas from information theory, an area which he, Claude Shannon, and others were developing at the time (Weaver (1949)). Although researchers quickly abandoned this approach, advancing numerous theoretical objections, we believe that the true obstacles lay in the relative impotence of the available computers and the dearth of machine-readable text from which to gather the statistics vital to such an attack. Today, computers are five orders of magnitude faster than they were in 1950 and have hundreds of millions of bytes of storage. Large, machine-readable corpora are readily available. Statistical methods have proven their value in automatic speech recognition (Bahl et al. (1983)) and have recently been applied to lexicography (Sinclair (1985)) and to natural language processing (Baker (1979), Ferguson (1980), Garside et al. (1987), Sampson (1986)). We feel that it is time to give them a chance in machine translation.
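The abstract does not spell out the formulation, but the statistical approach the paper develops is the noisy-channel decision rule: to translate a French sentence f, choose the English sentence e that maximizes Pr(e) Pr(f|e), where Pr(e) comes from a language model and Pr(f|e) from a translation model. The sketch below illustrates that decision rule only; the sentences and probability values are invented toy numbers, not figures from the paper.

# Toy illustration of the noisy-channel decision rule of statistical
# machine translation:  e* = argmax_e  Pr(e) * Pr(f | e).
# All sentences and probabilities below are hypothetical.

# Invented language model: prior probability Pr(e) of each English candidate.
language_model = {
    "the house": 0.4,
    "house the": 0.01,
    "a house": 0.3,
}

# Invented translation model: Pr(f | e) for the French sentence
# "la maison" given each English candidate.
translation_model = {
    "the house": 0.5,
    "house the": 0.5,
    "a house": 0.2,
}

def decode(lm, tm):
    """Return the English sentence maximizing Pr(e) * Pr(f | e)."""
    return max(lm, key=lambda e: lm[e] * tm.get(e, 0.0))

print(decode(language_model, translation_model))
# -> "the house": 0.4 * 0.5 = 0.20 beats 0.005 and 0.06

Note how the language model alone rules out the ungrammatical "house the" even though the translation model scores it as high as "the house"; this division of labor between the two models is the core of the approach the abstract advocates.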
Appears in Collections:
Language Education Institute (언어교육원) > Language Research (어학연구) > Volume 27 Number 1/4 (1991)

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.