Symbolic Systems 100: Introduction to Cognitive Science
Instructors:
Johan van Benthem johan<at>csli.stanford.edu Office hours: by appointment
Joan Bresnan bresnan<at>stanford.edu Office hours:
  • Tues 13:30–15:00
  • 420-022
TAs:
Alistair Isaac tokyodrifter<at>gmail.com Office hours:
  • Wed 16:00–17:00
  • 90-92A
Section:
  • Wed 17:15–18:05
  • 60-62L
Gorkem Ozbek gorkem<at>stanford.edu Office hours:
  • Thu 12:00–13:30
  • 460-040A
Section:
  • Mon 15:15–16:05
  • 160-315
Harry Tily hjt<at>stanford.edu Office hours:
  • Wed 14:30–15:30
  • 460-122
Section:
  • Tue 15:15–16:05
  • 160-322
Lectures:
  • Location: Braun Auditorium, Mudd Chemistry
  • Times: Tues/Thurs 10:00–11:30
Grading:
1 Foundations
    Week 1: The Computational Model
We start our course with a review of the 'classical model' of cognition, based on the notion of computation taken in a general sense. The idea, which goes back at least to the 17th century, is that major intelligent tasks such as reasoning involve some form of abstract computation. This idea then evolved in an interplay between several disciplines: logic, mathematics, linguistics, and later on also computer science, artificial intelligence, and various areas of psychology. This is very close, of course, to the mixture of ingredients that went into the creation of the original Symbolic Systems program at Stanford. Especially interesting here is the mix of natural human agents and virtual computational ones: the computational model fits all.
In this first week, we will show you some central aspects of this Grand Old Paradigm. On Tuesday, you will get some basic information about connections between logic and computation, with the so-called 'Turing Machine' as the major model of computation - which still holds its own against parallel machines, cellular automata, or quantum computers. (By the way, the computational model for cognition has its Patron Saint. The story of Turing's life is well worth reading, for its combination of mathematical achievement, philosophical insight, world-changing activity, and ultimately personal tragedy.) We will also mention some other features of the computational model, such as algorithmic structure and complexity.
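To make the notion concrete, here is a minimal Turing machine simulator in Python. This is only a sketch of our own: the transition-table encoding and the unary-successor task are illustrative choices, not material from the lecture.

```python
# A minimal Turing machine simulator (an illustrative sketch; the machine and
# task below are our own, not taken from the lecture materials).
# A machine is a transition table: (state, symbol) -> (new_state, new_symbol, move).

def run_turing_machine(table, tape, state="start", halt="halt", blank="_"):
    """Run the machine on the tape until it reaches the halting state."""
    tape, head = list(tape), 0
    while state != halt:
        if head == len(tape):          # extend the tape with a blank on the right
            tape.append(blank)
        state, new_symbol, move = table[(state, tape[head])]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
        if head < 0:                   # extend the tape on the left
            tape.insert(0, blank)
            head = 0
    return "".join(tape)

# Unary successor: scan right past the block of 1s, write one more 1, halt.
successor = {
    ("start", "1"): ("start", "1", "R"),   # skip over the input
    ("start", "_"): ("halt",  "1", "R"),   # first blank: write a 1 and stop
}

print(run_turing_machine(successor, "111"))   # -> '1111': unary 3 becomes 4
```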
    The original mathematical models of computation were developed for abstract purposes in the foundations of mathematics. But through a twist of history, they also stood at the cradle of computer science. And we have posted a famous paper by Turing in which he already foresaw their potential for the study of intelligent human behaviour.
    The computational paradigm will return throughout this course, as we look at more specific cognitive functions, such as reasoning or learning.
Even so, there are other major paradigms in cognitive science, as we will see next week, when experimental neurocognition comes to the fore: an approach that tries to understand cognitive phenomena with models closer to the actual functioning of the human brain. This reflects a tendency to take experimental evidence about our behaviour and its biological base, and the constraints these put on theory, much more seriously. It also matches a stronger empirical trend in the Symbolic Systems program as it is today, which has evolved over the years in tandem with its scientific environment.
The connection between these different paradigms in cognitive science is an interesting issue per se, and over time, interactions have ranged from open warfare to mutual respect. We have posted part of an influential text by David Marr which analyzes three levels at which computational analysis can operate, making room for both computation in a narrower sense and neurological models. An influential researcher combining such levels is John Anderson, whose work shows a fusion of methods and concerns that is characteristic of modern cognitive science.
But all this is looking ahead. First, on Thursday, our first guest lecturer John Perry will offer us a perspective on cognitive science, relating it to fundamental debates about possible 'reductions' between Mind and Brain.
    Tuesday 4th April Speaker: Johan van Benthem Summary: DOC format
    Thursday 6th April Speaker: John Perry
    At least since Descartes, philosophers have provided arguments that the mind cannot be identical with the brain. The arguments can be divided into those that argue that a material system like the brain cannot have propositional attitudes — that is, cannot believe, desire, know and the like — and those that argue that the brain and its states cannot exhaust the nature of consciousness. I'll look at both kinds of arguments, and claim that they are not convincing.
    Reading:
    Week 2: Mind and Brain
The computational model starts from an analogy, viz. that we are some sort of computational device, and then develops accounts of specific cognitive tasks as various sorts of computation. The mathematical paradigm behind this is discrete mathematics: logic, rule systems, grammars, classical search algorithms. And similar discrete models are ubiquitous in cognitive psychology of the classical variety, even when these differ creatively from what logicians or computer scientists had proposed. But one major development in recent decades has been an empirical turn toward reality. Given the fact that we perform all our cognitive activities with the human brain (though some poetic souls would accord some function to the human heart...), it is that brain, with its powers and limitations, that places constraints on plausible theory. In recent years, experimental techniques for measuring brain function have become highly sophisticated, and our first speaker Bill Newsome has shown us just how much, in the case of vision, arguably one of our most fundamental abilities. In tandem with this experimental turn, the nature of the mathematical paradigm has shifted: models for cognitive phenomena closer to brain function tend to involve *continuous mathematics* (calculus, probability theory). Our second speaker Dan Yarlett will show us one of the most successful and exciting frameworks of this sort, viz. neural nets, which are based on techniques coming from mathematical physics rather than logic or computer science.
This puts in place the second major stream of research in cognitive science. The reality of research today is that these various perspectives meet and interact. Ideas from the experimental side are percolating into the computational world, as well as into linguistics and logic, while vice versa, newer ideas about logic and computation turn out to merge well with studies of neural nets, and sometimes even suggest new types of brain experiments. Next week, we will see some first illustrations of this, as we look at Logic and AI, from their standard formulations to their current state, where experimental influences and continuous mathematics are making themselves felt.
In class, we cannot cover all connections between everything that we present in this course. But we will occasionally post papers or links that show the current level of 'interconnectivity', as ideas from different paradigms meet in cognitive science. The next weeks will give you lots of exciting mixtures.
    Thursday 13th April Speaker: Dan Yarlett
The aim of this talk is to describe the basic structure of neural networks, and to explain how this structure underpins the interesting learning behavior they exhibit, which has made them a topic of interest to researchers in a number of fields. I will begin by discussing the perceptron, and show how this class of model can be readily understood in terms of an error-driven learning framework. I will then discuss some of the limitations of the perceptron, and how these led to the creation, in the 1980s, of the more general feedforward network and the backpropagation learning algorithm. I will point to some of the math underlying these models, but my main goal will be to make the talk accessible to those without a mathematical background. In discussing the nature of neural networks I will also touch upon some of their applications in different fields and try to draw some general morals about the properties that make them particularly interesting to those concerned with the study of the mind.
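As a small taste of the error-driven learning the talk describes, here is a minimal perceptron sketch; the AND task, learning rate, and number of epochs are our own illustrative choices, not code from the talk.

```python
# A minimal perceptron with the classic error-driven update rule
# (an illustrative sketch; task and parameters are our own choices).

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (inputs, target) pairs with targets 0 or 1."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output          # error-driven: adjust only on mistakes
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical AND is linearly separable, so the perceptron converges on it;
# XOR is not, which is the classic limitation the talk mentions.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_data)
for inputs, target in and_data:
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", output, "(target:", target, ")")
```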
Reading: Slides: PowerPoint format
    Additional Materials:
    Assignment: PDF format
2 Reasoning and Decision
    Week 3: Logic, Search and AI
Cognitive science is about intelligent human behavior situated in physical and social environments. As the course proceeds, we will see many instances of such intelligent behavior, including language use, vision, planning, and learning. This week, we start with one of the earliest areas where cognitive behavior became part of systematic scientific study. The field of logic is the study of human reasoning, an accomplishment which underlies many cognitive abilities. Already in Antiquity, it was observed that this reasoning has recurrent patterns, which can be classified for validity (or fallacy), and then systematized and developed further. In particular, as we saw in Week 1, logic has had a long historical interaction with the design of reasoning machines, which eventually became our modern computers.
On Tuesday, we will look at some logical inference patterns and the information flow that goes on when they are used. But we will go beyond 'standard logic' in two ways. First, we look at what happens when the advice from logical theory is confronted with what people actually do, as studied in the 'psychology of reasoning'. The fit is not perfect; you will see how and why. Next, we look at developments in logical theory coming from AI, where so-called 'nonmonotonic logics' were developed to better describe our problem-solving behavior, and that of machines acting 'like us'. This will show you how the various disciplines involved in cognitive science can interact and influence each other. We end with an example that you may find even more surprising. On the surface, it looks as if nothing could be further removed from a logical system of rules than the neural nets that you saw last week as a representation of brain activity. But in reality, there is a flourishing literature these days showing that neural nets and logical inference systems are quite close - provided you take the right sort of logical systems. Alistair Isaac will tell you more about it.
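As a tiny illustration of the nonmonotonic flavor (our own toy example, in the spirit of default reasoning rather than any particular system from the lecture): a default rule lets us conclude that a bird flies, and that conclusion is withdrawn when we learn that the bird is a penguin - something classical logic can never do.

```python
# A toy illustration of nonmonotonic ("default") inference: conclusions can be
# retracted when new facts arrive. Our own sketch, not a specific formal system.

def flies(facts):
    """Default rule: birds fly, unless they are known to be penguins."""
    if "penguin" in facts:
        return False         # more specific information defeats the default
    if "bird" in facts:
        return True          # conclusion drawn by default
    return None              # no opinion either way

print(flies({"bird"}))             # True: Tweety the bird presumably flies
print(flies({"bird", "penguin"}))  # False: adding a fact withdrew the conclusion,
                                   # which classical (monotonic) logic never allows
```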
    Another sort of reasoning that is often presented in opposition to 'logic' is probabilistic inference. This, too, is ubiquitous in the life of humans, and of modern computers, and largely the same points apply: theory and practice diverge, but in ways that lead to interesting further research. We will give you a little glimpse of that as well.
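One classic point of divergence between probabilistic theory and human practice is the base-rate fallacy. The sketch below works through Bayes' rule on made-up screening numbers of our own: even a fairly accurate test for a rare condition yields mostly false positives, though many people intuitively judge otherwise.

```python
# Bayes' rule on a made-up screening example, illustrating the base-rate fallacy:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

p_disease = 0.01              # base rate: 1% of the population has the condition
p_pos_given_disease = 0.90    # sensitivity of the test
p_pos_given_healthy = 0.10    # false-positive rate

p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive

print(round(p_disease_given_pos, 3))  # 0.083: under 10%, though many people guess ~90%
```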
With all this in place, Professor Nilsson will talk about modern AI on Thursday, showing how its ambitions have changed over time, and how the traditional logical-computational paradigm of Week 1 meets up with the experimental brain-oriented work of Week 2. One of his main points will be that human behavior can, and should, be studied at many levels, which allow for ample interaction and cross-fertilization between approaches. You will recognize many earlier topics in his presentation, and see how they fit together in challenging research programs that are going on right now.
    Tuesday 18th April Speaker: Johan van Benthem, Alistair Isaac Summary: PDF format, part 1 / PDF format, part 2
    Thursday 20th April Speaker: Nils Nilsson Reading: Nilsson (1995): Eye on the Prize Slides: PDF format / PowerPoint format
Additional Materials:
    Assignment: PDF format
    Week 4: Decisions and Games
This week is about a feature of cognition, social interaction, that is an emerging theme across many disciplines.
Traditional cognitive science has emphasized the abilities of single agents: seeing something, saying something, drawing a conclusion, or planning a trip, all by yourself. But one of the most exciting and typical features of human behavior is social interaction: we know things about each other, we learn things from each other, we argue with others, and in most of our actions, we choose a response based on what someone else did, and so on. In other words, a single machine, or a single brain, is just one participant in complex social cognitive phenomena. And the repertoire that we have for this is amazing: subtle skills of recognizing the faces of other people, or of saying things that make them act in certain ways, all driven by what we know about others, or expect them to do. In fact, it has been said that our whole society is held together by 'mutual expectations'.
On Tuesday, we will show how this interactive perspective affects computer science and logic, since information flow really runs between different people, and between different processors in 'multi-agent systems'. We will also show briefly how interactive views of meaning have appeared in linguistics, changing our view of what goes on in basic episodes of language use.
But perhaps the arena par excellence for intelligent cognitive interaction is that of games. Games inspired the modern calculus of probability (the mathematicians developing it in the 17th century were thinking about gambling and betting), and later on, in the 20th century, 'game theory' was developed as an account of economic interaction, which has now found a wide array of further applications. On Thursday, Professor Feinberg from the Stanford Business School will tell us about some basic themes in modern game theory - which, incidentally, has many contacts with logic, computer science, and experimental psychology these days.
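To fix ideas, here is the most famous example from the field, the Prisoner's Dilemma, with a brute-force check for Nash equilibria; the payoff numbers are standard textbook values, chosen by us for illustration.

```python
# The Prisoner's Dilemma with a brute-force Nash-equilibrium check
# (standard textbook payoffs; an illustrative sketch, not lecture material).

# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def is_nash(row, col):
    """Nash equilibrium: neither player gains by unilaterally switching moves."""
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in moves)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in moves)
    return row_ok and col_ok

print([cell for cell in payoffs if is_nash(*cell)])
# -> [('defect', 'defect')]: the only equilibrium, even though both players
#    would prefer the (3, 3) outcome of mutual cooperation
```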
    Tuesday 25th April Speaker: Johan van Benthem Handout: PDF format, part 1 / PDF format, part 2
Assignment: PDF format
3 Language
    Week 5: Linguistic Competence & Processing Models
This week we turn to natural language, the medium par excellence for human communication, and according to many philosophers, also for human thought. On Tuesday, we review the basic paradigm of 'generative grammar', describing language as a system of expressions generated by a finite set of rules. This approach is closely related to the computational and logical view of cognition, but it introduces 'fine-structure' in the so-called Chomsky Hierarchy, which runs from simple regular grammars to more powerful context-free and context-sensitive ones, and beyond. This hierarchy has a parallel in a sequence of computational devices, from finite automata through push-down store automata to Turing machines. Thus, we link up with several earlier topics: e.g., the earliest deep mathematical results on finite automata were inspired by the connection with McCulloch-Pitts neural nets. From the start, cognitive considerations accompanied, and indeed inspired, linguistics in this style. We discuss issues of learnability (does the success of children in language acquisition presuppose innate grammars?), as well as the human-animal interface: do the recursive embedded structures that are so typical of generative linguistic syntax occur in animal behavior? At least, European starlings appear to have mastered them in birdsong...
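As a small worked contrast between levels of the hierarchy (our own illustration): the regular language a*b* needs only finitely many states, while the context-free language {a^n b^n} also needs an unbounded counter, the essence of a push-down store.

```python
# An illustrative contrast between levels of the Chomsky Hierarchy (our own sketch).

def accepts_a_star_b_star(s):
    """Finite automaton for a*b*: state 0 = reading a's, state 1 = reading b's."""
    state = 0
    for ch in s:
        if state == 0 and ch == "b":
            state = 1
        elif state == 1 and ch == "a":
            return False          # an 'a' after a 'b': reject
    return True

def accepts_anbn(s):
    """Pushdown-style recognizer for a^n b^n: count a's up, count b's down."""
    count, seen_b = 0, False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False      # a's may not follow b's
            count += 1
        else:                     # ch == "b"
            seen_b = True
            count -= 1
            if count < 0:
                return False      # more b's than a's so far
    return count == 0             # accept only if the counts match exactly

print(accepts_a_star_b_star("aaabb"), accepts_anbn("aaabb"))  # True False
print(accepts_anbn("aabb"))                                   # True
```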
On Thursday, we look at learning from a machine perspective, starting from the linguistic tradition and the mathematical learning theory of the 1960s, which seemed to show that learning significant families of languages from scratch was impossible. In line with current practice in computational (and even much of mainstream) linguistics, we look at large text corpora, closer in size to the exposure that human learners get. We review a number of learning methods and their success in recognizing syntactic patterns when confronted with corpus material without supervision, i.e., without a trainer telling the method what to do at first (as we saw with neural nets). The methods have by now been refined to yield high success scores on English and related languages, though Chinese remains a challenge... These learning methods are akin to those found in machine learning, a flourishing area of AI and applied computer science (machine algorithms can also learn your shopping habits at Safeway), and they rely heavily on sophisticated statistical techniques. The latter topic will return later this month.
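As a taste of the statistical flavor of such methods, here is a minimal bigram model over a toy corpus; the corpus and the maximum-likelihood estimate are our own illustration, far simpler than the unsupervised learners discussed in class.

```python
# A minimal taste of corpus statistics (our own toy example): maximum-likelihood
# bigram probabilities, the simplest building block of statistical language models.

from collections import Counter

corpus = "the dog saw the cat and the cat saw the dog".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_next(word, nxt):
    """P(next word | current word), estimated by relative frequency."""
    return bigrams[(word, nxt)] / unigrams[word]

print(p_next("the", "cat"))  # 0.5: half the occurrences of 'the' precede 'cat'
```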
    Week 6: Re-Thinking Linguistic Competence, Language & Thought
This week we took an empirical turn. On Tuesday we examined linguistic competence with data from language usage and from two experiments, which indicate, first, that implicit knowledge of their language enables speakers to make accurate probabilistic predictions of the syntactic choices of others, and, second, that linguistic manipulations that raise or lower probabilities influence grammaticality judgments, which have traditionally been the primary and privileged data for categorical grammatical models. Fresh data from the internet were discussed, as well as means of validating them by converging evidence from experiments. Students in SymSys 100 who participated as subjects in the experiment were thus "debriefed" by the lecture.
    On Thursday Lera Boroditsky examined the relation of language to other cognitive systems by reexamining Whorf's hypothesis, that the language(s) you speak influence the way you think. She presented a medley of experimental data showing the multiple ways that language is a central human cognitive system that interacts with and seems to shape all other cognitive systems, from phonetics and sexiness, through color classification experiments in English and Russian, to the mental classification of event types and linguistic tense/aspect markers.
    Her work provides a bridge to the cognitive system we will examine next week — Vision.
    Tuesday 9th May Speaker: Joan Bresnan Slides: PDF format
    Assignment: PDF format
4 Vision
    Week 7: Vision
We began week 7 with an introduction to the neuroscience of vision, which highlighted the integration of separate visual inferences about size, shadows, edges, and shapes into a scene. A careful examination of illusions reveals that global interpretations interact with the presence of many conflicting local properties. We then moved to some higher-order cognitive aspects of perception, including the phenomenon of change blindness. We saw, for example, that when asked to count how many times one team passed a basketball to another member of the same team in a complicated videotaped activity involving two teams simultaneously moving and passing two basketballs around a room, a significant portion of the viewers completely missed the intrusion of a gorilla into the middle of the scene. Many other examples support the overall lesson that our perception of a seamless three-dimensional world is constructed from discrete, attentionally selective fragments that must be temporally integrated. Change blindness reveals the gaps between our cognitive construction of the world and the fragmentary and selective perceptual basis for it.
Thursday 18th May Speaker: Jonathan Winawer Reading: Slides: PDF format
    Assignment: Text format
5 Learning
    Week 8: Learning
    Many studies in cognitive science are about steady-state behavior of human agents who have mastered some skill, such as language use, reasoning, or just perception. But equally important is the dynamic process of learning by which one arrives at such expert performance - and what it is that makes our mature cognitive skills learnable.
Learning was an issue when we looked at neural nets modeling the human brain, trained up to their desired performance. It was also discussed in the setting of linguistics, where learnability of grammars has been used as an explanatory device in the study of human languages. But learning is a central theme in many other fields, including philosophy, statistics, computer science, and especially psychology. And it is still gaining importance, e.g., in logic and game theory, especially when learning is cast interactively as a multi-agent Student/Teacher activity over time.
Out of these many approaches, this week highlights one particular line, viz. the study of learning in computer science and AI. Professor Ng will tell us about various approaches to machine learning by algorithms, and what they can achieve today. Indeed, some computer scientists even entrust their lives to learning programs, witness the Dutch Robosail yacht, which learns optimal trim for its sails from wind pressure data during single-handed ocean races.
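To make 'learning from data' concrete, here is a minimal online learner in the least-mean-squares style (our own sketch with made-up numbers; it is of course not the Robosail system): after each observation it nudges a linear estimate, improving with experience.

```python
# A minimal online learner (least-mean-squares), our own sketch with made-up data;
# the point is only the shape of the loop: predict, observe the error, adjust.

def lms_online(stream, lr=0.05):
    """Learn y ~ w * x + b from a stream of (x, y) observations."""
    w, b = 0.0, 0.0
    for x, y in stream:
        prediction = w * x + b
        error = y - prediction     # how wrong were we on this observation?
        w += lr * error * x        # nudge the parameters to reduce that error
        b += lr * error
    return w, b

# Hypothetical observations generated by y = 2x + 1, repeated many times:
data = [(x, 2 * x + 1) for x in [0, 1, 2, 3, 4]] * 200
w, b = lms_online(data)
print(round(w, 2), round(b, 2))    # -> 2.0 1.0 (approximately), recovered from data
```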
On Tuesday, before getting into all this, we also drew together a few strands from earlier weeks. You have seen how vision and space are closely related to the emergence of language, through the mathematical theory of transformations and invariants. Moreover, logic provided many examples of mixed practices where vision and symbolic processing come together. But you have also seen that the precise combination in humans is not totally clear, witness the classroom experiments in Edinburgh and Stanford which showed that students are diverse in their aptitude for diagrammatic versus symbolic reasoning. This makes education a quite complex task, once the fiction of a homogeneous student group is given up. Learning again!
    Tuesday 23rd May Speaker: Johan van Benthem Handout: DOC format / PDF format
    Assignment: None this week; a learning question will be included in next week's final
    Week 9: Finale
    Tuesday 30th May Speaker: Johan van Benthem / Joan Bresnan
    Thursday 1st June Speaker: Alistair Isaac / Gorkem Ozbek / Harry Tily
    Today's session is optional, and will cover your questions about the final.