Author: Ray C. Dougherty
Publisher: Psychology Press
This book's main goal is to show readers how to use Noam Chomsky's linguistic theory, Universal Grammar, to represent English, French, and German on a computer using the Prolog programming language. In so doing, it presents a follow-the-dots approach to natural language processing, linguistic theory, artificial intelligence, and expert systems. The basic idea is to introduce meaningful answers to significant problems involved in representing human language data on a computer. The book offers a hands-on approach to anyone who wishes to gain a perspective on natural language processing -- the computational analysis of human language data. All of the examples are illustrated with computer programs. The best way to get started is to run these existing programs to gain an understanding of how they work. After gaining familiarity, readers can begin to modify the programs, and eventually write their own. The first six chapters take a reader who has never heard of non-procedural, backtracking, declarative languages like Prolog and, using 29 full-page diagrams and 75 programs, detail how to represent a lexicon of English on a computer. A bibliography is programmed into a Prolog database to show how linguists can manipulate the symbols used in formal representations, including braces and brackets. The next three chapters use 74 full-page diagrams and 38 programs to show how the data structures (subcategorization, selection, phrase markers) and processes (top-down and bottom-up parsing, recursion) crucial to Chomsky's theory can be explicitly formulated in a constraint-based grammar and implemented in Prolog. The Prolog interpreters provided with the book are functionally identical to the high-priced commercial Prologs, though they lack their speed and memory capacity. They are ideal for learning, since anything learned on them carries over unmodified to C-Prolog and Quintus Prolog on mainframes.
Anyone who studies the Prolog implementations of the lexicons and syntactic principles of combination should be able to use Prolog to represent their own linguistic data on the most complex Prolog computer available, whether their data derive from syntactic theory, semantics, sociolinguistics, bilingualism, language acquisition, language learning, or some related area in which the grammatical patterns of words and phrases are more crucial than concepts of quantity. The printed examples illustrate C-Prolog on an Ultrix VAX, a standard university configuration. The disk included with the book contains shareware versions of Prolog-2 (IBM PC) and MacProlog (Macintosh), plus versions of the programs that run on C-Prolog, Quintus, Prolog-2, and MacProlog. Appendix II contains information about how to use the Internet, Gopher, CompuServe, and the free More BBS to download the latest copies of Prolog, programs, lexicons, and parsers. All figures (100+) in the book are available scaled to make full-size transparencies for class lectures. Valuable special features of this volume include:
* more than 100 full-page diagrams illustrating the basic concepts of natural language processing, Prolog, and Chomsky's linguistic theories;
* more than 100 programs, each illustrated in at least one script file, showing how to encode the representations and derivations of generative grammar in Prolog;
* more than 100 session files guiding readers through their own hands-on sessions with the programs illustrating Chomsky's theory;
* a 3.5" disk (IBM format) containing: 1. all programs in versions that run in C-Prolog or Quintus Prolog on an Ultrix VAX, on an IBM PC, and on a Macintosh; 2. a shareware version of Prolog-2 for IBM PC clones which runs all programs in the book; 3. a shareware version of MacProlog for the Macintosh which runs all programs in the book;
* instructions on using the Internet, CompuServe, and the free More BBS to download the latest copies of Prolog, programs, lexicons, and parsers; and
* numerous references enabling interested students to pursue questions in greater depth by consulting the items in the extensive bibliography.
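The core machinery the blurb describes, a lexicon whose verbs carry subcategorization frames, driving a top-down recursive parser, can be sketched outside Prolog as well. The following Python sketch illustrates the idea only; the grammar, the words, and all function names are this reviewer's illustrations, not programs from the book:

```python
# Minimal sketch: a lexicon with subcategorization frames and a
# top-down recursive-descent parser. Grammar and names are
# illustrative, not taken from the book's Prolog programs.

LEXICON = {
    "the":      ("det", []),
    "linguist": ("n",   []),
    "program":  ("n",   []),
    "runs":     ("v",   []),        # intransitive: no complements
    "writes":   ("v",   ["np"]),    # transitive: subcategorizes for an NP
}

def parse_np(words, i):
    """NP -> det n. Returns (tree, next_index) or None on failure."""
    if i + 1 < len(words):
        c1, _ = LEXICON.get(words[i], (None, None))
        c2, _ = LEXICON.get(words[i + 1], (None, None))
        if c1 == "det" and c2 == "n":
            return ("np", words[i], words[i + 1]), i + 2
    return None

def parse_vp(words, i):
    """VP -> v + whatever complements the verb's frame licenses."""
    if i >= len(words):
        return None
    cat, frame = LEXICON.get(words[i], (None, None))
    if cat != "v":
        return None
    tree = ["vp", words[i]]
    j = i + 1
    for comp in frame:              # consume each subcategorized complement
        if comp == "np":
            result = parse_np(words, j)
            if result is None:
                return None         # frame unsatisfied: the parse fails
            sub, j = result
            tree.append(sub)
    return tuple(tree), j

def parse_s(words):
    """S -> NP VP, succeeding only if every word is consumed."""
    result = parse_np(words, 0)
    if result is None:
        return None
    np, i = result
    result = parse_vp(words, i)
    if result is None:
        return None
    vp, i = result
    return ("s", np, vp) if i == len(words) else None

print(parse_s("the linguist writes the program".split()))
print(parse_s("the linguist writes".split()))  # frame unsatisfied
```

The second call fails because "writes" subcategorizes for an NP object that is missing, which is exactly the kind of lexical constraint the book encodes as Prolog facts.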
... University of Sussex, 1984: Natural Language Generation from Plans, preliminary progress report; University of Sussex GPSG Parser, UK, to 1984, contact: Roger Evans; Edvard Kardelj University, Ljubljana, Yugoslavia: SOVA parser, ca. 1983, contact ...
Author: Tim Johnson
Category: Computational linguistics
We are very happy to welcome you to NLPCC 2015, the 4th International Conference on Natural Language Processing and Chinese Computing. NLPCC is the annual conference of CCF-TCCI (Technical Committee of Chinese Information, ...
Author: Juanzi Li
This book constitutes the refereed proceedings of the 4th CCF Conference, NLPCC 2015, held in Nanchang, China, in October 2015. The 35 revised full papers presented together with 22 short papers were carefully reviewed and selected from 238 submissions. The papers are organized in topical sections on fundamentals on language computing; applications on language computing; NLP for search technology and ads; web mining; knowledge acquisition and information extraction.
... knowledge to understand queries, enable semantic matching, and provide direct answers to natural language queries. ... Natural Language Computing, Data Management and Analytics, and Internet Economics and Computational Advertising.
Author: Guodong Zhou
This book constitutes the refereed proceedings of the Second CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2013, held in Chongqing, China, in November 2013. The 31 revised full papers presented together with three keynote talks and 13 short papers were carefully reviewed and selected from 203 submissions. The papers are organized in topical sections on fundamentals on language computing; applications on language computing; machine learning for NLP; machine translation and multi-lingual information access; NLP for social media and web mining; knowledge acquisition; NLP for search technology and ads; NLP fundamentals; NLP applications; and NLP for social media.
Language Computing Based on Natural Annotated Boundary Knowledge. Natural annotation resources consist of various kinds of data generated by users in all media, such as web pages, forums, tweets, Wikipedia, etc.
Author: Maosong Sun
This book constitutes the refereed proceedings of the 12th China National Conference on Computational Linguistics, CCL 2013, and of the First International Symposium on Natural Language Processing Based on Naturally Annotated Big Data, NLP-NABD 2013, held in Suzhou, China, in October 2013. The 32 papers presented were carefully reviewed and selected from 252 submissions. The papers are organized in topical sections on word segmentation; open-domain question answering; discourse, coreference and pragmatics; statistical and machine learning methods in NLP; semantics; text mining, open-domain information extraction and machine reading of the Web; sentiment analysis, opinion mining and text classification; lexical semantics and ontologies; language resources and annotation; machine translation; speech recognition and synthesis; tagging and chunking; and large-scale knowledge acquisition and reasoning.
In this section we illustrate the conception of Everyday Language Computing and SFL. 1.1 Everyday Language Computing We propose the paradigm shift from the information processing with numbers and formal symbolic logic to that with our ...
Author: José Luis Vicedo
Publisher: Springer Science & Business Media
This book constitutes the refereed proceedings of the 4th International Conference, EsTAL 2004, held in Alicante, Spain, in October 2004. The 42 revised full papers presented were carefully reviewed and selected from 72 submissions. The papers address current issues in computational linguistics and in monolingual and multilingual intelligent language processing and applications, in particular written language analysis and generation; pragmatics, discourse, semantics, syntax, and morphology; lexical resources; word sense disambiguation; linguistic, mathematical, and psychological models of language; knowledge acquisition and representation; corpus-based and statistical language modeling; machine translation and translation tools; computational lexicography; information retrieval; extraction and question answering; automatic summarization; document categorization; natural language interfaces; dialogue systems; and evaluation of systems.
Author: Xiaodan Zhu
Publisher: Springer Nature
This two-volume set of LNAI 12340 and LNAI 12341 constitutes the refereed proceedings of the 9th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2020, held in Zhengzhou, China, in October 2020. The 70 full papers, 30 poster papers and 14 workshop papers presented were carefully reviewed and selected from 320 submissions. They are organized in the following areas: Conversational Bot/QA; Fundamentals of NLP; Knowledge Base, Graphs and Semantic Web; Machine Learning for NLP; Machine Translation and Multilinguality; NLP Applications; Social Media and Network; Text Mining; and Trending Topics.
This book reviews the state of the art of deep learning research and its successful applications to major NLP tasks, including speech recognition and understanding, dialogue systems, lexical analysis, parsing, knowledge graphs, machine ...
Author: Li Deng
In recent years, deep learning has fundamentally changed the landscapes of a number of areas in artificial intelligence, including speech, vision, natural language, robotics, and game playing. In particular, the striking success of deep learning in a wide variety of natural language processing (NLP) applications has served as a benchmark for the advances in one of the most important tasks in artificial intelligence. This book reviews the state of the art of deep learning research and its successful applications to major NLP tasks, including speech recognition and understanding, dialogue systems, lexical analysis, parsing, knowledge graphs, machine translation, question answering, sentiment analysis, social computing, and natural language generation from images. Outlining and analyzing various research frontiers of NLP in the deep learning era, it features self-contained, comprehensive chapters written by leading researchers in the field. A glossary of technical terms and commonly used acronyms in the intersection of deep learning and NLP is also provided. The book appeals to advanced undergraduate and graduate students, post-doctoral researchers, lecturers and industrial researchers, as well as anyone interested in deep learning and natural language processing.
Study of cognates among South Asian languages for the purpose of building lexical resources. In Proceedings of National Seminar on Creation of Lexical Resources for Indian Language Computing and Processing. ACL.
Author: Bandyopadhyay, Sivaji
Publisher: IGI Global
"This book provides pertinent and vital information that researchers, postgraduate, doctoral students, and practitioners are seeking for learning about the latest discoveries and advances in NLP methodologies and applications of NLP"--Provided by publisher.
Association for Computing in Humanities, Association for Computational Linguistics and Association for Literary and Linguistic Computing. On-line at http://etext.virginia.edu/TEI.html. Spinillo, Mariangela. 2000.
Author: Gerald Nelson
Publisher: John Benjamins Publishing
Category: Language Arts & Disciplines
ICE-GB is a 1 million-word corpus of contemporary British English. It is fully parsed, and contains over 83,000 syntactic trees. Together with the dedicated retrieval software, ICECUP, ICE-GB is an unprecedented resource for the study of English syntax. Exploring Natural Language is a comprehensive guide to both corpus and software. It contains a full reference for ICE-GB. The chapters on ICECUP provide complete instructions on the use of the many features of the software, including concordancing, lexical and grammatical searches, sociolinguistic queries, random sampling, and searching for syntactic structures using ICECUP's Fuzzy Tree Fragment models. Special attention is given to the principles of experimental design in a parsed corpus. Six case studies provide step-by-step illustrations of how the corpus and software can be used to explore real linguistic issues, from simple lexical studies to more complex syntactic topics, such as noun phrase structure, verb transitivity, and voice.
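ICECUP itself is dedicated retrieval software, but the simplest operation it offers, concordancing, is just a keyword-in-context (KWIC) display of every hit in the corpus. A minimal Python sketch of that operation (an illustrative stand-in, not ICECUP code; the corpus sentence and function name are invented for the example):

```python
# Illustrative keyword-in-context (KWIC) concordance: for each hit,
# show a fixed window of left and right context. Not ICECUP code.

def kwic(tokens, keyword, window=3):
    """Return one (left-context, keyword, right-context) tuple per hit."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append((left, tok, right))
    return lines

corpus = "the parsed corpus makes the study of the noun phrase easy".split()
for left, kw, right in kwic(corpus, "the"):
    print(f"{left:>25} [{kw}] {right}")
```

Real concordancers like ICECUP go well beyond this, of course, by letting the query refer to the syntactic annotation (parts of speech, tree fragments) rather than just the word forms.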