Artists build on top of science. Today’s cutting-edge math and computer science research becomes tomorrow’s breakthrough creative projects.

Computer vision algorithms, machine learning techniques, and 3D topology are becoming vital prerequisites to doing daily work in creative fields from interactive art to generative graphics, data visualization, and digital fabrication. If they don’t grapple with these subjects themselves, artists are forced to wait for others to digest this new knowledge before they can work with it. Their creative options shrink to those parts of this research selected by Adobe and Autodesk for inclusion in prepackaged tools.

This class is designed to help you start seizing the results of this research for your own creative work. You’ll learn how to explore the published academic literature for techniques that will help you build your dream projects. You’ll learn how to apply these techniques across multiple media. And you’ll learn how to use these techniques to make your own projects more sophisticated and more original.

Structure of the Class

This class has two major goals. First, it aims to teach you a series of mathematical and computer science techniques that you can use to build better projects.

Each class we’ll explore a technique from one of these research fields. We’ll learn to understand the original research: how does the technique work? What problem is it trying to solve? What could it be useful for? And we’ll learn how to apply the technique in Processing across multiple media, from text to images, from 3D models to movies and music.

The second goal is more ambitious. This class aims to help you launch your own relationship with the research fields that will best enable your own creative goals. Are you interested in building interactive sculptures that track large groups of people outdoors? Are you interested in writing a program that visualizes the differences in style of different poets? Are you interested in building a video installation that automatically learns what parts of the room visitors tend to stand in? This class will give you an introduction to the areas of research that will help you make these creative ambitions a reality.

The class will be taught in Processing. Basic ICM-level familiarity with Processing is expected, as well as a commitment to stretching your programming skills no matter your current level.

Assignments

  • Weekly Project After each class you should do a project using the techniques and code we learned in class. Your project could involve applying the techniques to new sources of data, using the interactions they enable in new ways or contexts, or (especially awesome) combining them with something else we’ve already learned. By midnight on the night before the next class, you should send a link to the class mailing list with the documentation of your project. We’ll discuss some of the projects in class.

  • Weekly Paper Summary Before each class you should select a paper from the Makematics database of papers or one you’ve found in your own searching. Write a blog post about it that describes, in your own words, what the paper does. You don’t need to capture exactly how it works, but try to describe what techniques it uses and what effects it achieves. This description can be short, no more than a handful of sentences. Reproducing images from the papers is highly encouraged. Then write a quick description of how you would use this technique in a project. Feel free to flesh out the project with sketches or whatever else it takes to make the idea vivid for us. Email a link to your blog post to the class mailing list by midnight the night before class.

  • Final Project For a final project you should use what you’ve learned in class to do an initial prototype of your dream project (see the Special Assignment below under Week 1). If you cannot figure out how you can use what we’ve done in class to prototype your dream project, email me or come talk to me and I’ll help you work something out or find an additional technique that can help. The final projects will be presented on the last day of class.

Grading

  • If you miss two or more assignments by the end of the semester, you will fail.
  • If your assignments regularly show lack of effort, or are regularly turned in late, you will fail.
  • If you miss more than one class session, or are regularly late, you will fail.
  • If you do not respect your fellow classmates, you will fail.
  • Otherwise, you will do awesome.

Class Schedule

Week 1: Introduction / Marching Squares (September 10)

Marching Squares is one of the most venerable and widely cited techniques in computer graphics. It’s a technique for finding continuous contours around the areas of a grayscale image that are above a given threshold value. It’s the basis for the Photoshop “magic wand” tool.

This week, we’ll learn how marching squares works and we’ll look at some exciting uses of it. First, we’ll see how to use marching squares to turn any continuous color image, from Kinect depth images to weather data, from election maps to website eye-tracking data, into a series of layers that can be digitally fabricated to produce a 3D object. Second, we’ll follow along with some work happening in Ken Perlin and NYU’s Vision Learning Group that uses marching squares to track the fingers of a user’s hand with the Kinect. We’ll see how you can turn a marching squares-derived contour into the locations of individual fingers.

I’ll provide you with Processing libraries that implement both the marching squares algorithm in general as well as the finger-tracking application in particular. (Many thanks to Murphy Stein for the code here.)
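
If you want a rough preview of what such a library does under the hood, here is a minimal marching squares sketch (separate from the class libraries, and assuming a grayscale image file named "depth.png" in the sketch’s data folder; the threshold and grid spacing are placeholders too). It thresholds the image, walks over small 2×2 cells of samples, and looks up which contour segment each of the 16 corner patterns contributes.

```processing
// A minimal, illustrative marching squares sketch (not the class library).
// Assumes a grayscale image called "depth.png" in the sketch's data folder.
PImage img;
float threshold = 128;  // which brightness counts as "inside"
int step = 4;           // sample every 4 pixels to keep the grid coarse

void setup() {
  size(640, 480);
  img = loadImage("depth.png");
  img.resize(width, height);
}

void draw() {
  image(img, 0, 0);
  img.loadPixels();
  stroke(255, 0, 0);
  for (int y = 0; y < img.height - step; y += step) {
    for (int x = 0; x < img.width - step; x += step) {
      // Build a 4-bit case index from the cell's four corners
      int c = 0;
      if (inside(x, y))               c |= 8;  // top-left
      if (inside(x + step, y))        c |= 4;  // top-right
      if (inside(x + step, y + step)) c |= 2;  // bottom-right
      if (inside(x, y + step))        c |= 1;  // bottom-left
      drawCase(c, x, y, step);
    }
  }
}

boolean inside(int x, int y) {
  return brightness(img.pixels[y * img.width + x]) > threshold;
}

// Draw the contour segment(s) for one of the 16 cases, using edge
// midpoints (a real implementation would interpolate along each edge).
void drawCase(int c, int x, int y, int s) {
  float midX = x + s / 2.0;  // x of the top/bottom edge midpoints
  float midY = y + s / 2.0;  // y of the left/right edge midpoints
  switch (c) {
    case 1: case 14: line(x, midY, midX, y + s); break;      // bottom-left corner
    case 2: case 13: line(midX, y + s, x + s, midY); break;  // bottom-right corner
    case 3: case 12: line(x, midY, x + s, midY); break;      // horizontal span
    case 4: case 11: line(midX, y, x + s, midY); break;      // top-right corner
    case 6: case 9:  line(midX, y, midX, y + s); break;      // vertical span
    case 7: case 8:  line(x, midY, midX, y); break;          // top-left corner
    case 5:  line(midX, y, x, midY); line(midX, y + s, x + s, midY); break; // saddle
    case 10: line(x, midY, midX, y + s); line(midX, y, x + s, midY); break; // saddle
    // cases 0 and 15 (all corners outside / all inside): no contour here
  }
}
```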

This opening week will also include an introduction to the class. Why should artists and designers wrestle with computer science research? How can you get started in such a seemingly impenetrable and advanced field? We’ll also go over basic class procedures — homework, grading, final projects — as well as take an overview of the material we’ll be covering in the class.

Links:

Special Assignment What is your dream project? What project have you always fantasized about building but always considered out of your reach? The purpose of the Makematics approach of mining computer science research for creative techniques is to enable you to come closer to achieving these dream projects. For next class, prepare short descriptions of two projects you’ve always dreamed of doing. Write them up (as a blog post or in an email) and send them to the class mailing list.

Week 2: Linear Classification and Support Vector Machines (September 24)

Classification is the problem of sorting given data into a finite number of pre-existing categories. It is the basis for most machine learning applications. This week, we’ll learn about two forms of supervised learning: linear classification and support vector machines. Both of these techniques begin with a set of training data that has been hand-labeled by some person. They each then attempt to generalize from that training data in order to classify new, un-labeled data.

This week begins a two-class run on classification. We’ll focus on numerical data and learn the basics of linear classification. We’ll also see some of the limits of linear classification and introduce Support Vector Machines.

The Support Vector Machine (SVM) is a powerful machine learning technique for classifying images, text, and other data into groups. An SVM starts with human-classified data that demonstrates a series of categories. Once the algorithm is trained on this data, it can place new, unknown examples into the same categories based on their resemblance to the training data across a number of factors.

This week, we’ll start with the basics of SVM, training a model with 2D numerical data. Next class we’ll move on to fancier SVM tricks such as representing images and other forms of more complex data for use in training sets.

Linear classification and SVMs are widely used to detect spam, identify objects in images, perform handwriting recognition, etc.
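
To make the linear classification idea concrete before class, here is a minimal perceptron-style sketch (a rough illustration, not the in-class example; the random training data, margin, learning rate, and epoch count are all assumptions). It labels 2D points by which side of a hidden line they fall on, nudges a weight vector whenever a point is misclassified, and draws the learned decision boundary.

```processing
// A minimal 2D linear classifier (perceptron-style), for illustration only.
float[][] points;    // each row: {x, y, label} with label +1 or -1
float w0, w1, bias;  // the learned line: w0*x + w1*y + bias = 0

void setup() {
  size(400, 400);
  // "Hand-labeled" training data: label +1 above the hidden line y = x, -1 below,
  // with a visible margin so the two classes are cleanly separable
  points = new float[200][3];
  for (int i = 0; i < points.length; i++) {
    float x = random(width), y = random(height);
    if (abs(y - x) < 20) { i--; continue; }
    points[i][0] = x;
    points[i][1] = y;
    points[i][2] = (y > x) ? 1 : -1;
  }
  // Perceptron training: nudge the weights whenever a point is misclassified
  float rate = 0.0001;
  for (int epoch = 0; epoch < 1000; epoch++) {
    for (float[] p : points) {
      float guess = (w0 * p[0] + w1 * p[1] + bias >= 0) ? 1 : -1;
      float error = p[2] - guess;   // 0 if correct, +2 or -2 if wrong
      w0   += rate * error * p[0];
      w1   += rate * error * p[1];
      bias += rate * error;
    }
  }
}

void draw() {
  background(255);
  // Plot the training points, colored by label
  noStroke();
  for (float[] p : points) {
    fill(p[2] > 0 ? color(0, 0, 255) : color(255, 0, 0));
    ellipse(p[0], p[1], 6, 6);
  }
  // Draw the learned decision boundary w0*x + w1*y + bias = 0
  stroke(0);
  if (abs(w1) > 1e-6) {
    line(0, -bias / w1, width, (-bias - w0 * width) / w1);
  }
}
```

An SVM improves on this kind of classifier by choosing, among all the lines that separate the training data, the one with the largest margin between the two classes.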

  • Homework review
  • More Pixels Law
  • Intro to the problem of classification
  • Linear classifier in 2D (t-shirts and ages)
  • Linear classifier in multiple dimensions (matchmaker example)
  • The Kernel Trick
  • 2D SVM for multiple groups
  • Tips on data gathering
  • The Assignment

Links:

Week 3: Support Vector Machines Continued (October 8)

This week we’ll dive much more deeply into Support Vector Machines. We’ll start off by reviewing some of the theory. We’ll discuss the limitations of linear classification: its sensitivity to outliers and its inability to separate certain data sets. We’ll look at how SVMs overcome these problems. We’ll learn what a support vector is and we’ll discuss “the kernel trick”.

After the theory, we’ll look at applications. We’ll cover the idea of “vectorization”: how to turn texts, images, and other media into data that can be classified using SVMs. Specifically, we’ll look at working with images using color histograms and Histogram of Oriented Gradients. And we’ll look at working with text using a technique called “bag of words”.

We’ll look at applications of SVMs for hand gesture recognition, distinguishing birds from squirrels, recognizing toys, and detecting spam.
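
As a small sketch of the vectorization step (not the class code; the image filename and bin count are placeholder assumptions), here is one way to turn an image into a color-histogram feature vector in Processing. The resulting array of numbers is the kind of row you could hand to an SVM library as one training example.

```processing
// "Vectorize" an image as a normalized color histogram, for illustration only.
float[] colorHistogram(PImage img, int bins) {
  float[] hist = new float[bins * 3];  // bins for each of red, green, and blue
  img.loadPixels();
  for (int c : img.pixels) {
    hist[           int(red(c)   / 256.0 * bins)]++;
    hist[bins     + int(green(c) / 256.0 * bins)]++;
    hist[bins * 2 + int(blue(c)  / 256.0 * bins)]++;
  }
  // Normalize so images of different sizes produce comparable vectors
  for (int i = 0; i < hist.length; i++) hist[i] /= img.pixels.length;
  return hist;
}

void setup() {
  PImage img = loadImage("bird.jpg");         // placeholder filename
  float[] features = colorHistogram(img, 8);  // a 24-number description of the image
  println(features);
}
```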

  • Homework Review
  • ArrayList Review
  • LinearClassifier and outliers
  • Support Vectors
  • Kernel Tricks
  • Different Kernel Types
  • Vectorizing images with color histograms
  • Evaluating different training parameters
  • Vectorizing images with Histogram of Oriented Gradients
  • Other ways of vectorizing images
  • Where to find training images
  • How to make your own training images
  • Vectorizing texts with “bag of words”

Links:

Week 4: Principal Component Analysis (October 22)

Principal Component Analysis (PCA) is a statistical technique for finding patterns in complex data. PCA takes complex data such as all of the pixels in an image, all of the points in a 3D model, or all the documents in a large database and uses statistical analysis to find the aspects of the data that are most meaningful, i.e. that determine most of the differences between the individual items in the set.

PCA can be used to detect areas of images that have been photoshopped, perform facial recognition, determine the orientation of 3D models, determine the political bias of news stories, etc.
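
For a rough preview of the idea in two dimensions (not the class code; the fake data and its spread are assumptions), here is a minimal sketch that scatters some correlated points, computes their covariance matrix, and draws the first principal component through the mean, using the closed-form angle available for a 2×2 covariance matrix.

```processing
// Minimal 2D PCA sketch, for illustration only.
int n = 300;
float[] xs = new float[n], ys = new float[n];

void setup() {
  size(400, 400);
  // Fake data: points spread mostly along one diagonal direction
  for (int i = 0; i < n; i++) {
    float t = randomGaussian() * 60;   // large spread along the main axis
    float u = randomGaussian() * 12;   // small spread across it
    xs[i] = width / 2 + t * cos(0.5) - u * sin(0.5);
    ys[i] = height / 2 + t * sin(0.5) + u * cos(0.5);
  }
}

void draw() {
  background(255);
  // Mean of the data
  float mx = 0, my = 0;
  for (int i = 0; i < n; i++) { mx += xs[i]; my += ys[i]; }
  mx /= n;
  my /= n;
  // Covariance matrix entries
  float cxx = 0, cyy = 0, cxy = 0;
  for (int i = 0; i < n; i++) {
    float dx = xs[i] - mx, dy = ys[i] - my;
    cxx += dx * dx;
    cyy += dy * dy;
    cxy += dx * dy;
  }
  cxx /= n;
  cyy /= n;
  cxy /= n;
  // For a 2x2 covariance matrix the first principal component's angle
  // has a closed form: theta = 0.5 * atan2(2*cxy, cxx - cyy)
  float theta = 0.5 * atan2(2 * cxy, cxx - cyy);
  // Draw the points and the principal axis through the mean
  noStroke();
  fill(0);
  for (int i = 0; i < n; i++) ellipse(xs[i], ys[i], 3, 3);
  stroke(255, 0, 0);
  strokeWeight(2);
  line(mx - 200 * cos(theta), my - 200 * sin(theta),
       mx + 200 * cos(theta), my + 200 * sin(theta));
}
```

The eigenfaces approach listed below is the same computation carried out in a much higher-dimensional space, where every pixel of a face image acts as one dimension.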

  • Homework review – So much awesome!
  • Midterm checkin
  • An overview of the Makematics process: turn stuff into numbers, do math, profit
  • Intro to Linear Algebra: Matrices, Matrix Multiplication, Eigenvectors, and Eigenvalues
  • PCA: Variance, Covariance, and Principal Components
  • PCA and data compression
  • Applying PCA to Images with OpenCV
  • Applying PCA to 3D points with Kinect
  • Eigenfaces and face recognition

Links:

Week 5: Dynamic Programming (November 5)

Dynamic Programming is a general problem-solving technique that breaks a problem down into sub-problems that can be solved individually and then combines their results into an overall solution. It is especially useful when the same sub-problems occur repeatedly, since it stores the result of each sub-problem for re-use.

Dynamic programming can be used to detect the beat of a piece of music, to automatically eliminate or repeat areas of an image so it can be resized without changing its most important parts (seam carving), and to help a robot plan a path through an environment.
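
As a small preview (not the class code; the grid size and random energy values are placeholder assumptions), here is the dynamic programming step at the heart of seam carving, sketched on a toy grid: each cell's cheapest top-to-bottom path cost is built from the already-solved cells in the row above it, and those sub-results are stored for re-use.

```processing
// The dynamic programming core of seam carving, on a toy grid (illustration only).
void setup() {
  int rows = 6, cols = 8;
  float[][] energy = new float[rows][cols];
  for (int y = 0; y < rows; y++) {
    for (int x = 0; x < cols; x++) {
      energy[y][x] = random(1);  // stand-in for per-pixel image "energy"
    }
  }

  // cost[y][x] = cheapest total energy of any path from the top row down to (x, y)
  float[][] cost = new float[rows][cols];
  cost[0] = energy[0].clone();   // base case: the top row itself
  for (int y = 1; y < rows; y++) {
    for (int x = 0; x < cols; x++) {
      // Re-use the already-solved sub-problems in the row above
      float best = cost[y - 1][x];
      if (x > 0)        best = min(best, cost[y - 1][x - 1]);
      if (x < cols - 1) best = min(best, cost[y - 1][x + 1]);
      cost[y][x] = energy[y][x] + best;
    }
  }
  println("cheapest seam cost: " + min(cost[rows - 1]));
}
```

In seam carving itself the energy comes from the image's gradients, and the cheapest path, found by walking back up the cost table, is the seam of pixels to remove or duplicate.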

Links:

Week 6: Bayes' Rule (November 19)

Bayes' Rule is a probabilistic technique that lets you figure out the most likely causes of observed evidence based on how likely each possible cause is to produce those effects. Or, as Geoff Bohling puts it, Bayes' Rule “lets us turn information about the probability of different effects from each possible cause:

P(effect | cause)

into information about the probable cause given the observed effects:”

P(cause | effect)

Bayes' rule can be used to filter spam, to automatically translate texts between languages, to identify people from their gait, etc.
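
As a tiny worked preview (not the class code; all the probabilities are made-up numbers), here is the classic drug-test example of Bayes' rule: given a prior on the cause and the likelihood of a positive test under each cause, compute how probable the cause is once a positive test is observed.

```processing
// Bayes' rule on the classic drug-test example, with invented numbers.
void setup() {
  float pUser = 0.01;           // prior: P(user)
  float pPosGivenUser = 0.99;   // likelihood: P(positive | user)
  float pPosGivenClean = 0.02;  // false positive rate: P(positive | not user)

  // Total probability of the observed effect, P(positive)
  float pPos = pPosGivenUser * pUser + pPosGivenClean * (1 - pUser);

  // Bayes' rule: P(user | positive) = P(positive | user) * P(user) / P(positive)
  float pUserGivenPos = pPosGivenUser * pUser / pPos;
  println("P(user | positive) = " + pUserGivenPos);  // roughly 0.33
}
```

Even with an accurate test, the low prior means most positives are false positives, which is exactly the kind of reasoning a Bayesian spam filter applies to the words in an email.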

  • SIGGRAPH Asia Technical Papers Trailer!
  • Homework Review
  • You guys are Awesome
  • Guest critic announcement
  • Thinking Fast and Slow
  • Introduction to Bayes Rule
  • The drug test example
  • The taxi cab example
  • The formula P(X|Z)
  • Bayes rule in robot localization
  • A Plan for Spam: Bayesian spam filtering
  • Introduction to the HashMap
  • Shiffman’s implementation of Bayesian spam filter
  • Bayes rule in IBM’s Watson

Links:

Week 7: Present Final Projects (December 3)