Meetings

**__Winter 2013 Meetings__**

 * **Wednesday April 3, 2013 (5pm BA5256):**

Our final meeting of the semester will be this Wednesday, April 3. The main purpose of the meeting will be to coordinate things for next year. If you're interested in getting involved in UAIG, or want to know more about what that would entail, please come out to this meeting. As usual, we'll be meeting in BA5256 at 5pm.

 * **Wednesday March 27, 2013 (5pm BA5256):**

Guest speaker: Frank Rudzicz

Topic: Communicating with Machines: An Introduction to SPOClab

Abstract: In this talk I introduce SPOClab (Signal Processing and Oral Communication), which bridges Computer Science at the University of Toronto with the Toronto Rehabilitation Institute. The goal of our lab is to produce software that helps to overcome challenges of communication, including speech and language disorders. This will be organized into two co-dependent streams of research. First, we will embed control-theoretic models of speech production into augmented ASR systems using various machine-learning techniques. Second, these systems will be deployed in software that can be used in practice; this involves adjacent disciplines such as human-computer interaction and general natural language processing to design and study application interfaces for disabled users.

 * **Wednesday February 13, 2013 (5pm BA5256):**

Guest speaker: Ilya Sutskever

Topic: Image classification with convolutional neural networks

Abstract: We describe an application of large convolutional neural networks to object recognition. Our network has 8 layers, 600 million connections, 60 million parameters, and 600,000 neurons, making it one of the largest neural networks ever trained. The network was trained to categorize images into 1,000 classes using the 1.2M training images of the ImageNet Large Scale Visual Recognition Challenge 2012 competition. The network was implemented on two GPUs and used a number of novel techniques to prevent overfitting. We entered a variant of this network in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. Additionally, the network's visual representation (which has 4096 dimensions) outperformed 128 neurons from the IT area of a macaque's visual cortex at a certain recognition task that causes other computer vision systems to fail. This is joint work with Alex Krizhevsky and Geoffrey Hinton.

Slides from talk:
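The "top-5 test error rate" above has a simple operational definition: a prediction counts as correct if the true label appears among the model's five highest-scoring classes. A minimal sketch of that metric (toy scores and labels of our own, not ILSVRC data; the function name is illustrative):

```python
import numpy as np

def top5_error(scores, labels):
    """Fraction of examples whose true label is NOT among the
    five highest-scoring classes (scores: examples x classes)."""
    # argsort ascending, then take the last five columns = top-5 class indices
    top5 = np.argsort(scores, axis=1)[:, -5:]
    hits = [labels[i] in top5[i] for i in range(len(labels))]
    return 1.0 - np.mean(hits)

# Toy example: 3 examples over 10 classes, labels set to the argmax
rng = np.random.default_rng(0)
scores = rng.standard_normal((3, 10))
labels = np.array([scores[i].argmax() for i in range(3)])
err = top5_error(scores, labels)  # 0.0 by construction
```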

 * **Wednesday February 6, 2013 (5pm BA5256):**

"Introductory" meeting (a little late).

 * **Wednesday January 30, 2013 (5pm BA5256):**

Guest speaker: Jackie Cheung

Topic: Discovering Semantic Knowledge Using Distributional Information

Abstract: Mapping a sentence or some other linguistic unit to a representation of its meaning is required for many complex tasks in natural language processing. In natural language semantics, there have been two major approaches to modelling meaning. One approach uses symbolic, logical representations and their associated logical inference rules to represent and reason about the world. Another uses statistical, distributional information about the contexts in which a word or phrase appears in a large corpus of training text to model its meaning. I will show that distributional information can actually be used to discover the sort of semantic knowledge and structures used in the logical approach in two settings.
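As a toy illustration of the distributional approach described above: a word's meaning can be approximated by counts of the words that occur near it, and similarity by the cosine between those count vectors. A minimal sketch (tiny invented corpus; the window size and corpus are arbitrary choices of ours, not from the talk):

```python
import numpy as np
from collections import Counter

# Tiny invented corpus; real distributional models use large corpora.
corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the mouse ate the cheese .").split()

def context_vector(word, window=2):
    """Count the words appearing within `window` tokens of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    keys = sorted(set(a) | set(b))
    va = np.array([a.get(k, 0) for k in keys], float)
    vb = np.array([b.get(k, 0) for k in keys], float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# "cat" and "dog" share contexts (both get chased) more than "cat" and "cheese"
sim_cat_dog = cosine(context_vector("cat"), context_vector("dog"))
sim_cat_cheese = cosine(context_vector("cat"), context_vector("cheese"))
```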

 * **Wednesday January 23, 2013 (5pm BA5256):**

Guest speaker: Abdel-rahman Mohamed

Topic: How do machines recognize speech?

Abstract: In this talk I will introduce the field of speech processing, focusing on Automatic Speech Recognition (ASR). I will describe the basic blocks of a typical ASR system, then describe our contributions at UofT to the state-of-the-art ASR system. The algorithms we developed at UofT are the best-performing ones at the Google, IBM, and Microsoft research labs and are currently used in Google's Android 4.1.

Contact info: asamir [@] cs [dot] toronto [dot] edu

**__Fall 2012 Meetings__**

 * **Monday November 26, 2012 (5pm @ Top Sushi):**

This will be our last official meeting of the semester. We'll take a break from the usual meeting format and meet at Top Sushi (just across the street on College) for dinner at the regular time, 5-6pm. We can discuss topics we covered this semester, themes (and projects!) for next semester, and just plain socialize.

 * **Monday November 19, 2012 (5pm BA5256):**

This week we'll step back and take a broader look at the field of AI as a whole. We'll discuss the variety of often divergent goals AI researchers have and the different motivations and assumptions underlying different approaches. Here is a diverse list of readings/videos related to these ideas. Investigate whatever you feel is interesting, but feel free to show up at the meeting even if you haven't looked at any of them. This discussion should be pitched at such a level that no specific prerequisite knowledge is required.

Read:
  - Bengio and LeCun, //Scaling Learning Algorithms Towards AI//
  - Tom Griffiths's Bayesian reading list
  - Tenenbaum et al.'s //How to Grow a Mind: Statistics, Structure, and Abstraction//
  - MIT CogNet: Computational Intelligence
  - "Artificial Intelligence" entry in the Internet Encyclopedia of Philosophy

Watch:
  - Henry Markram, director of the Blue Brain supercomputing project, gives a TED talk
  - Jeff Hawkins' TED talk about how brain science will change computing

 * **Monday November 12, 2012:**

No meeting - fall break!

 * **Monday November 5, 2012 (5pm BA5256):**

Guest speaker: Charlie Tang

Topic: Deep Networks for Face Recognition

Abstract: Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representation. In this work, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a //single// image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in //one-shot// face recognition.
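The Lambertian reflectance assumption mentioned in the abstract says observed intensity is the albedo times the (clamped) dot product of the surface normal with the light direction. A minimal rendering sketch of just that assumption (toy albedo and normals of our own invention; the learned model in the talk is far richer):

```python
import numpy as np

def render_lambertian(albedo, normals, light):
    """Image formation under the Lambertian assumption:
    intensity = albedo * max(0, surface normal . light direction)."""
    light = np.asarray(light, float)
    light = light / np.linalg.norm(light)       # unit light direction
    shading = np.clip(normals @ light, 0.0, None)  # (H, W) shading term
    return albedo * shading

# Toy 2x2 patch facing the camera (all normals point along +z)
albedo = np.array([[0.5, 0.8], [0.8, 0.5]])
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
img_frontal = render_lambertian(albedo, normals, [0, 0, 1])  # light head-on
img_side = render_lambertian(albedo, normals, [1, 0, 1])     # light at 45 degrees
```

Changing only `light` changes the image while the albedo stays fixed, which is the intuition behind explaining illumination variations with a single lighting latent variable.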

 * **Monday October 29, 2012 (5pm BA5256):**

Sean will lead a discussion on deep belief nets.

'Required' readings:
  - //A Fast Learning Algorithm for Deep Belief Nets// - this introduces deep belief nets and the relevant concepts related to them. It's a pretty comprehensive overview and a good paper to start with.
  - //Learning Multiple Layers of Representation// - a less technical introduction to deep learning in general. Though many details are glossed over, it provides a good overview and is perhaps easier to read than the previous paper.
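For readers following the 'required' readings: the core training step in deep belief nets is contrastive divergence on a restricted Boltzmann machine. A minimal CD-1 sketch under simplifying assumptions (biases omitted, binary units, toy random data; not the papers' full procedure):

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.1):
    """One CD-1 step for a binary RBM with weights W (visible x hidden)."""
    # Positive phase: hidden probabilities and samples given the data
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visibles and up again
    pv1 = sigmoid(h0 @ W.T)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    # Approximate gradient: <v h>_data - <v h>_reconstruction
    return W + lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)

# Toy run: 6 visible units, 3 hidden units, 8 random binary training vectors
W = rng.standard_normal((6, 3)) * 0.01
data = (rng.random((8, 6)) < 0.5).astype(float)
for _ in range(10):
    W = cd1_update(W, data)
```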

 * **Monday October 22, 2012 (5pm BA5256):**

Optional reading if you want to know more about contrastive divergence: //On Contrastive Divergence Learning//

Guest speaker: Paul Grouchy

Topic: Evolutionary Algorithms and Artificial Intelligence

Abstract: Natural evolution has produced the most advanced intelligence discovered to date: our own. One would then expect that computer simulations of evolution could produce artificial intelligences. A variety of evolution-based programming techniques will be presented. Some of these techniques will be examples of Evolutionary Algorithms (EAs) as a form of AI, while others will showcase the power of EAs to artificially evolve neural-network-based AIs.

Slides from talk:
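As a concrete miniature of the evolutionary algorithms in the talk: the sketch below evolves bit strings toward the OneMax objective (maximize the number of ones) with elitist truncation selection and single-bit mutation. All parameters are arbitrary illustrative choices, not anything from the talk:

```python
import random

random.seed(0)

def evolve(fitness, genome_len=10, pop_size=20, generations=50):
    """Minimal elitist EA over bit-string genomes: keep the best half,
    produce children by single point mutation, no crossover."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # elitism: best half survives
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1  # flip one bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of ones in the genome
best = evolve(fitness=sum)
```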

 * **Monday October 16, 2012 (5pm BA5256):**

Intro to neural networks

Read: //[|From Neural Networks to Deep Learning: Zeroing in on the Human Brain]//

Watch: //The Next Generation of Neural Networks//
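A neural network in miniature, for anyone coming to the intro cold: layers of weighted sums passed through nonlinearities. The sketch below hard-codes weights (our own choice, purely for illustration) so a two-unit hidden layer computes XOR, a function no single-layer network can represent:

```python
import numpy as np

def step(x):
    """Threshold activation: 1 if the input is positive, else 0."""
    return (x > 0).astype(float)

def forward(x, W1, b1, W2, b2):
    """Two-layer feedforward pass: hidden = step(W1 x + b1),
    output = step(W2 hidden + b2)."""
    h = step(W1 @ x + b1)
    return step(W2 @ h + b2)

# Hand-chosen weights: hidden unit 1 fires for "at least one input on",
# hidden unit 2 fires for "both inputs on"; output = unit1 AND NOT unit2.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
W2 = np.array([[1.0, -2.0]])
b2 = np.array([-0.5])

outputs = {(a, b): int(forward(np.array([a, b], float), W1, b1, W2, b2)[0])
           for a in (0, 1) for b in (0, 1)}
```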

 * **Monday October 1, 2012 (5pm BA5256):**

Introductory meeting

**__Past Meetings__**

 * **November 28, 2011:**

Professor Sheila McIlraith from the Knowledge & Representation group will be presenting.

 * **November 21, 2011:**

No meeting. (Award reception for NSERC recipients and others.)

 * **November 14, 2011:**

Adam Golding on the computational modeling of preferences.

 * **October 31, 2011:**

Meeting in PT266, 5-6 pm. Topic: Knowledge & Representation. Outline: We have two talks scheduled. See below.

 * **5:00 – 5:30**

Title: Plan Dispatchability: A Survey

Author: Christian Muise

Abstract: In this talk we present the simple temporal network formalism, its extensions, and the applications/solutions that have been presented in the literature. A simple temporal network is a type of plan that describes the events that must be executed, and the temporal constraints that must be satisfied during execution. The focus will be primarily on showing the consistency of temporal networks, and controllability of temporal networks with uncertainty. We will also briefly cover some of the more esoteric extensions to simple temporal networks that involve resources, preferences, and choice of subplans.
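The consistency check mentioned in the abstract has a compact classical form: a simple temporal network is consistent exactly when its distance graph has no negative cycle. A small sketch (the constraint encoding here is our own choice for illustration; all-pairs shortest paths via Floyd-Warshall, so O(n^3)):

```python
import math
from itertools import product

def stn_consistent(n, constraints):
    """STN over events 0..n-1 with constraints (i, j, lo, hi) meaning
    lo <= t_j - t_i <= hi.  Build the distance graph (edge i->j with
    weight hi, j->i with weight -lo) and report consistency: no
    negative cycle after Floyd-Warshall."""
    d = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)
        d[j][i] = min(d[j][i], -lo)
    for k, i, j in product(range(n), repeat=3):
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return all(d[i][i] >= 0 for i in range(n))

# Event 1 occurs 10-20 after event 0; event 2 occurs 30-40 after event 1.
ok = stn_consistent(3, [(0, 1, 10, 20), (1, 2, 30, 40)])
# Adding "event 2 within 25 of event 0" makes the network inconsistent:
bad = stn_consistent(3, [(0, 1, 10, 20), (1, 2, 30, 40), (0, 2, 0, 25)])
```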

 * **5:30 – 6:00**

Title: Solving QBF: CNF and alternatives

Author: Alexandra Goultiaeva

Abstract: The Quantified Boolean Formula (QBF) problem is a PSPACE-complete extension of the satisfiability (SAT) problem that allows formulas to have quantification. It can be used to naturally and efficiently represent problems with adversarial dynamics, such as conditional planning, as well as various problems in CAD and verification. The most widespread approach to solving QBF is a search-based algorithm working on prenex Conjunctive Normal Form (CNF) representations. However, in recent years it has been shown that relaxing these constraints can often be beneficial. This talk will outline the current approaches to solving QBF formulas, as well as techniques for non-CNF and non-prenex reasoning.
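To make the quantification concrete: a closed QBF is simply true or false, and a naive evaluator recurses over the quantifier prefix, taking a conjunction at universal variables and a disjunction at existential ones. A sketch (exponential time, for illustration only; the representation is our own, not any solver's):

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Naive QBF evaluation by recursion over the quantifier prefix.
    prefix: list of ('forall' | 'exists', var) pairs;
    matrix: function mapping an assignment dict to a bool."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, var: val})
                for val in (False, True))
    return all(branches) if quant == 'forall' else any(branches)

# forall x exists y: (x or y) and (not x or not y)  -- true: pick y = not x
phi = lambda a: (a['x'] or a['y']) and (not a['x'] or not a['y'])
result = eval_qbf([('forall', 'x'), ('exists', 'y')], phi)
```

Swapping the quantifier order changes the answer, which is exactly the adversarial structure the abstract mentions: `exists y forall x` demands one y that works against every x.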

 * **October 17, 2011:**

BACK IN ACTION – meetings on Mondays 5-6 pm in PT266, with the following preliminary schedule:
  - Oct. 24 – Misko Dzamba on computational biology
  - Oct. 31 – KR presentations
  - Nov. 7 – FALL BREAK (go read)
  - Nov. 14 – Chris Maddison on recurrent neural nets
  - Nov. 21 – Adam Golding on computational modeling of preferences

 * **March 15, 2011:**

focus: knowledge and representation
guest speakers: Eric Hsu and Alexandra Goultiaeva
topic: SAT solving
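For background ahead of the SAT-solving talk: most modern SAT solvers descend from the DPLL procedure, which alternates unit propagation with branching on a variable. A heavily simplified sketch (no pure-literal rule, no clause learning; clauses use the usual signed-integer literal encoding):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: clauses are lists of integer literals (positive
    means the variable is true, negative means false).  Returns a
    satisfying assignment dict, or None if unsatisfiable."""
    assignment = assignment or {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified under current assignment
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Unit propagation if a one-literal clause exists, otherwise branch
    unit = next((c[0] for c in simplified if len(c) == 1), None)
    lit = unit if unit is not None else simplified[0][0]
    for value in ([lit > 0] if unit is not None else [True, False]):
        result = dpll(simplified, {**assignment, abs(lit): value})
        if result is not None:
            return result
    return None

sat = dpll([[1, 2], [-1, 2], [-2, 3]])  # satisfiable
unsat = dpll([[1], [-1]])               # unsatisfiable
```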

 * **March 8, 2011:**

focus: computational linguistics
guest speaker: Chris Parisien

Finding structure in the mire: Bayesian models of how children learn to use verbs

Children are fantastic data miners. In the first few years of their lives, they discover a vast amount of knowledge about their native language. This means learning not just the abstract representations that make up a language, but also learning how to generalize that knowledge to new situations — in other words, figuring out how language is productive. Given the noise and complexity in what kids hear, this is incredibly difficult, yet still, it seems effortless. In verb learning, a lot of this generalization appears to be driven by strong regularities between form and meaning. Seeing how a certain verb has been used, kids can make a decent guess about what it means. Knowing what a verb means can suggest how to use it. In this talk, I present a series of hierarchical Bayesian models to explain how children can acquire and generalize abstract knowledge of verbs from the language they would naturally hear. Using a large, messy corpus of child-directed speech, these models can discover a broad range of abstractions governing verb argument structure, verb classes, and alternation patterns. By simulating experimental studies in child development, I show that these complex probabilistic abstractions are robust enough to capture key generalization behaviours of children and adults. Finally, I will discuss some promising ways that the insights gained from modelling child language can benefit the development of a valuable large-scale linguistic resource, namely VerbNet.

 * **March 1, 2011:**

focus: computational biology
guest speaker: Abe Heifets

LigAlign: Flexible ligand-based active site alignment and analysis

Ligand-based active site alignment is a widely adopted technique for the structural analysis of protein–ligand complexes. However, existing tools for ligand alignment treat the ligands as rigid objects even though most biological ligands are flexible. We present LigAlign, an automated system for flexible ligand alignment and analysis. When performing rigid alignments, LigAlign produces results consistent with manually annotated structural motifs. In performing flexible alignments, LigAlign automatically produces biochemically reasonable ligand fragmentations and subsequently identifies conserved structural motifs that are not detected by rigid alignment. (See readings for the full article.)

 * **February 22, 2011:**

Reading week. No meeting.

 * **February 15, 2011:**

focus: computational cognitive science

In preparation for the Distinguished Lecture Series happening earlier the same day, members are asked to choose and read a paper by Josh Tenenbaum (see Readings section). The meeting will consist of a brief overview of the talk, as well as a discussion of the ideas and concepts related to Josh Tenenbaum's research.

 * **February 8, 2011:**

focus: cognitive science

Adam Golding will lead the discussion. Everyone is asked to choose an article from one of the encyclopedias listed under readings. The group discussion will target the heterogeneity/eclecticism/pluralism inherent in cogsci.

 * **February 1, 2011:**

focus: computer vision
invited talk: Pablo Sala

see blurb (from Pablo) below:

"The Need for Mid-Level Shape Priors in Object Categorization"

Object categorization plays an important role in computer vision and image retrieval. Although a trivial task for humans, this is an extremely challenging computational problem, which remains largely unsolved. Without knowing what they are looking at, humans have the ability to organize ambiguous visual stimuli into coherent groups. This important perception mechanism involved in the early stages of the object categorization process is called "perceptual grouping". Although research in perceptual grouping was very active in the object recognition community until the mid-90s, in recent years most categorization researchers have moved to formulations of the recognition problem as object detection. However, recognition as detection does not scale to large object databases, where an informative shape index requires domain-independent (not object-specific) shape priors to drive the processes of perceptual grouping and perceptual abstraction. In this talk, I'll present research on the problem of generic object recognition. Rather than assuming an object-level shape prior, I follow the classic formulation of the recognition problem and assume a vocabulary of compositional parts from which objects can be constructed. I'll show an approach to group image contours into abstract 2-D parts and discuss various methods to select, from among the set of generated 2-D parts, a subset of parts that provides the best interpretation of the image. Finally, I'll explain how the selected 2-D parts can be grouped into 3-D volumes abstracting the 3-D shapes in the scene.

 * **January 25, 2011:**

focus: computational linguistics and NLP (same place and time as last week)

Reading posted under the readings section and in drop-box. To continue our speech processing theme, we will also watch [|"words in puddles of sound"]

 * **January 18, 2011:**

focus: computational linguistics and NLP

We are meeting 4-5 pm in BA5256 (this will be our regular room). Please read the paper posted under the readings section. Michelle will lead the discussion on authorship attribution, as well as provide us with an introduction to computational linguistics.

 * **January 11, 2011:**

focus: computational linguistics – the problems


 * **December:** Break: do what you like.

 * **November 29, 2010:**

- focus: computer vision – features for detection, evolution of categorization

 * **November 22, 2010:**

- focus: human vision – research directions

 * **November 15, 2010:**

- focus: computer vision – methods
- led by Konstantine
- we had presentations


 * **November 8, 2010:** holiday

 * **November 1, 2010:**

- focus: intro to machine learning – methods
- led by Sean

 * **October 21, 2010:**

- introductory meeting
- group discussion
- administrative issues