1 - 10 of 10 Chapters
[In this brief chapter, we summarize the background knowledge needed to work through the book (Sect. 1.1). After that, we provide an overview of the remainder of the book (Sect. 1.2).]
[In this chapter, we introduce the ACT-R cognitive architecture and pyactr, the Python 3 implementation of it that we use throughout the book. We end with a basic ACT-R model for subject-verb agreement.]
[In this chapter, we introduce the basics of syntactic parsing in ACT-R. We build a top-down parser and learn how we can extract intermediate stages of pyactr simulations. This enables us to inspect detailed snapshots of the cognitive states that our processing models predict.]
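The top-down parsing strategy mentioned in this abstract can be illustrated outside of ACT-R as well. The following is a minimal sketch in plain Python of a top-down (recursive-descent) recognizer for a toy grammar; the grammar, lexicon, and function names are illustrative assumptions, not the book's pyactr model.

```python
# Toy context-free grammar and lexicon (illustrative assumptions only).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["ProperN"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {
    "Det": {"the", "a"},
    "N": {"boy", "dog"},
    "ProperN": {"Mary"},
    "V": {"sees", "sleeps"},
}

def parse(symbol, words):
    """Top-down expansion: return the list of prefix lengths i such
    that `symbol` can derive words[:i]."""
    if symbol in LEXICON:  # preterminal: consume one word if it matches
        return [1] if words and words[0] in LEXICON[symbol] else []
    results = []
    for rule in GRAMMAR.get(symbol, []):
        positions = [0]
        for rhs_symbol in rule:  # expand each right-hand-side symbol in turn
            positions = [p + q for p in positions
                         for q in parse(rhs_symbol, words[p:])]
        results.extend(positions)
    return results

def recognizes(sentence):
    words = sentence.split()
    return len(words) in parse("S", words)
```

A usage example: `recognizes("the boy sees the dog")` returns `True`, while `recognizes("sees the boy")` returns `False`. The ACT-R version in the chapter additionally exposes the intermediate goal-buffer states at each expansion step, which is what makes the simulation snapshots inspectable.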
[In the previous chapters, we introduced and used several ACT-R modules and buffers: the declarative memory module and the associated retrieval buffer, the procedural memory module and the associated goal buffer, and the imaginal buffer. These are core ACT-R modules, but focusing exclusively on...]
[In this chapter, we introduce the basics of Bayesian statistical modeling. Bayesian methods are not specific to ACT-R, or to cognitive modeling. They are a general framework for doing plausible inference based on data—both categorical (‘symbolic’) and numerical (‘subsymbolic’) data.]
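The core of the Bayesian framework this abstract refers to is the update rule P(h|data) ∝ P(h) · P(data|h). A minimal sketch of that update over a discrete hypothesis space, in plain Python with illustrative numbers (not an example from the book):

```python
from math import prod

def posterior(priors, likelihood, data):
    """Discrete Bayes' rule: P(h|data) proportional to P(h) * prod_d P(d|h)."""
    unnorm = {h: p * prod(likelihood(h, d) for d in data)
              for h, p in priors.items()}
    z = sum(unnorm.values())  # normalizing constant P(data)
    return {h: v / z for h, v in unnorm.items()}

# Two hypotheses about a coin's bias toward heads, with equal priors
# (hypothetical values chosen for illustration).
priors = {0.5: 0.5, 0.9: 0.5}
like = lambda h, d: h if d == "H" else 1 - h

# After observing three heads, belief shifts toward the biased coin.
post = posterior(priors, like, ["H", "H", "H"])
```

Here `post[0.9]` is roughly 0.85: three heads in a row make the heads-biased hypothesis much more plausible than the fair coin, which is the kind of plausible inference from data that the chapter then applies to fitting ACT-R model parameters.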
[The goal of ACT-R is to provide accurate cognitive models of learning and performance, as well as accurate neural mappings of cognitive activities. In this chapter, we introduce the ‘subsymbolic’ declarative memory components of ACT-R. These are essential to modeling performance, i.e., actual...]
[In Chap. 4, we introduced a simple lexical decision task and a simple left-corner parser. The models we introduced in that chapter might be sufficient with respect to the way they simulate interactions with the environment, but they are too simplistic in their assumptions about memory, since...]
[In this chapter, we introduce our assumptions about semantic representations and build a semantic processor, that is, a basic parser able to incrementally construct such semantic representations. Our choice of a processing-friendly semantics framework is Discourse Representation Theory (DRT,...]
[In this chapter, we generalize our eager left-corner incremental interpreter to cover conditionals and conjunctions. We focus on the (dynamic) semantic contrast between conditionals and conjunctions because the interaction between these sentential operators and anaphora/cataphora provides a...]
[Where do we go from here? We will keep this short because, if the reader has made it this far, the answer really is: in whatever direction the reader’s research interests lie.]