Practical information
05 Nov
Sat., 10:00 - 6:00
Palais des congrès
Gene Kogan will walk us through the theory and techniques of machine learning applied to creative practices. The workshop will introduce participants to core algorithms used for parsing, visualizing, and discovering patterns in complex data, with a focus on multimedia such as images, sounds, and text. The goal is to learn how to model complex, nonlinear forms of interaction across multiple modalities in interactive systems, enabling rich new kinds of real-time instruments for generative art and musical performance. A broad range of applications of interest to graphic artists, game designers, filmmakers, and sound designers will also be surveyed.

Participants do not need advanced knowledge of machine learning, but should have at least basic experience with programming, either in a text-based language such as Python, Java, or C++, or in a patch-based environment such as Max/MSP.

Participants should bring their own computers with the prerequisite software installed; Gene Kogan will specify the exact requirements shortly before the workshop. They may also bring folders of images, sounds, or text samples they wish to work with, as well as input devices such as Kinects, webcams, joysticks, microphones, or Leap Motions. Last but not least, watching an introductory video from ml4a.github.io/classes/ beforehand will help participants make the most of the workshop.