Co-Founder
Trove Securities
Email: connord [at] media.mit.edu
Resume | Google Scholar
This collaboration between Synbiota Inc. and Genomikon Inc. (both Synbiota companies) leveraged genetic engineering, decentralized manufacturing, and the web to collaboratively design the micro-organism that best expressed violacein, a valuable anti-cancer molecule.
The success of #Sciencehack proved that Synbiota's DIY biotechnology could be used for real research, and that it was possible for a loosely knit group of artists, designers, biologists, hackers, and students to bio-manufacture valuable medicine at home.
References
Winner: SXSW Interactive Accelerator
Article: "Growing Violacein Factories w Synbiota" - Britt Wray
Article: "Distributed Science" - Forbes
Article: "Kitchen counter bio hacking" - Joi Ito
Archive: #ScienceHack website
One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention-sensitive mobile computing. eyeLook appliances use embedded low-cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed-reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media and managing life.
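The attention-gating behaviour is simple enough to sketch. Below is a minimal, illustrative Python sketch of the seeTV and seeTXT logic; the sensor, player, and display interfaces and all timing constants are hypothetical stand-ins, not the shipped implementation.

```python
# Illustrative sketch of eyeLook-style attention gating. The sensor,
# player, and display objects are hypothetical stand-ins for the
# embedded eyeCONTACT sensor (ECS) and the device's media hardware.
import time


class SeeTV:
    """Pause video whenever the user looks away; resume on eye contact."""

    def __init__(self, sensor, player, grace=0.5):
        self.sensor, self.player = sensor, player
        self.grace = grace                      # tolerate brief glances away
        self._last_contact = time.monotonic()

    def tick(self):
        now = time.monotonic()
        if self.sensor.user_is_looking():
            self._last_contact = now
            self.player.resume()
        elif now - self._last_contact > self.grace:
            self.player.pause()


class SeeTXT:
    """Flash one word at a time, advancing only while the user looks."""

    def __init__(self, sensor, display, words, wpm=300):
        self.sensor, self.display = sensor, display
        self.words = iter(words)
        self.delay = 60.0 / wpm                 # seconds per flashed word

    def run(self):
        for word in self.words:
            while not self.sensor.user_is_looking():
                time.sleep(0.05)                # hold text until gaze returns
            self.display.show(word)
            time.sleep(self.delay)
```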
References
ACM UIST. Seattle, Washington. 2005.
Connor Dickie, Roel Vertegaal, Changuk Sohn and Daniel Cheng.
Available: ACM - eyeLook, Local: p103-dickie.pdf
Patent: #8,672,482
Commercialized by Samsung in 2013.
Flexcam is a novel compound camera platform that exploits flexibility as a means to dynamically re-configure images captured by a camera-array. Flexcam's camera-array has altered optical characteristics when flexed, allowing users to dynamically expand and contract the camera's field of view (FOV). Integrated bend sensors measure the amount of flexion in the device. The degree of flexion is used as input to software, which dynamically stitches images from the camera array and adjusts viewfinder size to reflect the virtual camera's FOV.
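As an illustration of the control loop, here is a minimal Python sketch of how a normalized bend reading could drive the virtual camera's FOV; the FOV range is an assumed value, and OpenCV's stock stitcher stands in for the custom image-stitching pipeline.

```python
# Illustrative sketch of Flexcam-style FOV control (not the authors' code).
# Assumes a normalized bend reading in [0, 1] from the integrated bend
# sensors, and per-camera frames that are already undistorted.
import cv2
import numpy as np

MIN_FOV_DEG, MAX_FOV_DEG = 60.0, 120.0        # assumed FOV range of the array


def virtual_fov(bend: float) -> float:
    """Map flexion (0 = flat, 1 = fully flexed) to the virtual camera FOV."""
    return MIN_FOV_DEG + bend * (MAX_FOV_DEG - MIN_FOV_DEG)


def compose_view(frames: list[np.ndarray], bend: float) -> np.ndarray:
    """Stitch the camera-array frames, then crop to the bend-derived FOV."""
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        return frames[len(frames) // 2]        # fall back to the centre camera
    # Keep a horizontal slice of the panorama proportional to the FOV.
    keep = virtual_fov(bend) / MAX_FOV_DEG
    w = panorama.shape[1]
    margin = int(w * (1.0 - keep) / 2)
    return panorama[:, margin:w - margin]
```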
References
ACM CHI. Austin, Texas. 2012.
Connor Dickie, Nicholas Fellion, Roel Vertegaal.
Available: ACM - Flexcam [PDF]
Available: from author - Flexcam [PDF]
Press: Gizmodo
We present a prototype attentive cell phone that uses a low-cost EyeContact sensor and speech analysis to detect whether its user is in a face-to-face conversation. We discuss how this information can be communicated to callers to allow them to employ basic social rules of interruption.
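A plausible version of the underlying decision rule, assuming boolean eye-contact and voice-activity samples collected over a short window, is sketched below; the 60% threshold is invented for illustration.

```python
# Sketch of an attentive-phone "busy" rule. Assumes recent boolean
# samples from the EyeContact sensor and a voice-activity detector;
# the threshold is an illustrative value, not the published one.

def in_conversation(eye_contact_window, speech_window, threshold=0.6):
    """Infer a face-to-face conversation from recent sensor samples."""
    if not eye_contact_window or not speech_window:
        return False
    gaze_ratio = sum(eye_contact_window) / len(eye_contact_window)
    speech_ratio = sum(speech_window) / len(speech_window)
    # Require sustained mutual gaze *and* speech before telling callers
    # that the user is busy.
    return gaze_ratio > threshold and speech_ratio > threshold
```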
References
ACM CHI. Minneapolis, USA. 2002.
Roel Vertegaal, Connor Dickie, Changuk Sohn and Myron Flickner.
Available: ACM - Attentive Cellphone
Eye Contact Sensing Glasses report when people look at their wearer. When eye contact is detected, the glasses stream this information to appliances to inform them about the wearer's engagement. We present one example of such an appliance, eyeBlog, a conversational video blogging system. The system uses eye contact information to decide when to record video from the glasses' camera.
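The recording rule can be sketched in a few lines. The version below is illustrative only: the glasses and recorder interfaces are hypothetical, and the stop timeout is an invented value that adds hysteresis so brief breaks in eye contact don't chop the clip.

```python
# Sketch of eyeBlog's record-on-eye-contact behaviour (hypothetical
# glasses/recorder interfaces; timeout invented for illustration).
import time


class EyeContactRecorder:
    def __init__(self, glasses, recorder, stop_after=4.0):
        self.glasses, self.recorder = glasses, recorder
        self.stop_after = stop_after            # seconds without eye contact
        self._last_contact = None

    def tick(self):
        now = time.monotonic()
        if self.glasses.eye_contact_detected():
            self._last_contact = now
            if not self.recorder.recording:
                self.recorder.start()           # someone is engaging the wearer
        elif (self.recorder.recording and self._last_contact
              and now - self._last_contact > self.stop_after):
            self.recorder.stop()                # conversation has ended
```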
References
ACM CHI. Vienna, Austria. 2004.
Connor Dickie, Roel Vertegaal, Jeffrey S. Shell, Changuk Sohn, Daniel Cheng and Omar Aoudeh.
Available: ACM - eyeBlog
Media: Boingboing.net, Slashdot.org
Augmenting and Sharing Memory with EyeBlog
1st ACM CARPE, ACM Multimedia. New York, New York. 2004.
Connor Dickie, Roel Vertegaal, David Fono, Changuk Sohn, Daniel Chen, Daniel Cheng, Jeffrey S. Shell and Omar Aoudeh.
Available: ACM - eyeBlog 2.0
Media: Globe & Mail [PDF]
We present LookPoint, a system that uses eye input for switching input between multiple computing devices. LookPoint uses an eye tracker to detect which screen the user is looking at, and then automatically routes mouse and keyboard input to the computer associated with that screen. We evaluated the use of eye input for switching between three computer monitors during a typing task, comparing its performance with that of three other selection techniques: multiple keyboards, function key selection, and mouse selection. Results show that the use of eye input is 111% faster than the mouse, 75% faster than function keys, and 37% faster than the use of multiple keyboards. A user satisfaction questionnaire showed that participants also preferred eye input over the other three techniques. The implications of this work are discussed, as well as future calibration-free implementations.
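Conceptually, the routing step reduces to a point-in-rectangle test over the monitor layout. The Python sketch below illustrates this with an assumed three-monitor geometry and a hypothetical switch() hook (for example, a software KVM); it is not the LookPoint source.

```python
# Illustrative LookPoint-style input routing. Assumes an eye tracker
# reporting a gaze point in a desktop-wide coordinate space; the screen
# geometry and switch() hook are invented for this sketch.

SCREENS = {
    "left":   (0,    0, 1280, 1024),   # (x, y, width, height)
    "middle": (1280, 0, 1280, 1024),
    "right":  (2560, 0, 1280, 1024),
}


def screen_at(gaze_x, gaze_y):
    """Return the name of the screen containing the gaze point, if any."""
    for name, (x, y, w, h) in SCREENS.items():
        if x <= gaze_x < x + w and y <= gaze_y < y + h:
            return name
    return None


def route_input(gaze_x, gaze_y, active, switch):
    """Move keyboard/mouse focus to the machine behind the gazed-at screen."""
    target = screen_at(gaze_x, gaze_y)
    if target and target != active:
        switch(target)    # e.g. trigger a software KVM switch
        return target
    return active
```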
References
ACM OZCHI. Sydney, Australia. 2006.
Connor Dickie, Jamie Hart, Roel Vertegaal and Alex Eiser.
Available: ACM - LookPoint
For Ontario House at the 2010 Vancouver Winter Olympic Games, InteraXon created Bright Ideas, an installation that allowed users in Vancouver to control the lights on the CN Tower, Niagara Falls and the Parliament Buildings in real time using their thoughts alone.
Thoughts were turned into light patterns instantaneously as digitized signals from users' brains were beamed over the Rocky Mountains and across the vast prairies to three of Ontario's most iconic landmarks, a distance of 3,000 km. Live video feeds from the three sites were projected on large screens in Ontario House, allowing participants and onlookers to see their impact in real time.
The invention of Kameraflage display technology was a spin-off from my work in computer vision for eye-tracking at MIT. I learned that both CCD and CMOS image sensors see a broader spectrum of light than the human eye, and that the firmware of digital cameras often renders both ultraviolet and near-infrared light as human-visible colours on the display.
Initially a wearable electronic fashion accessory, Kameraflage display technology eventually found its home in advertising displays, where the technology was licensed by Diageo, Activision, and AKQA.
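A toy calculation shows why this works: a sensor's red, green, and blue filters each pass some near-infrared, so a pure-NIR source still produces a non-zero RGB triplet, typically a magenta/purple tint. The leakage values below are invented for illustration, not measured data.

```python
# Toy illustration of why cameras render near-infrared as a visible
# colour: each channel of a Bayer sensor leaks some NIR. The relative
# response numbers are invented for illustration, not measured data.

NIR_LEAKAGE = {"R": 0.9, "G": 0.4, "B": 0.6}    # assumed relative response


def rendered_rgb(nir_intensity):
    """RGB triplet a camera might display for a pure-NIR source."""
    return tuple(round(nir_intensity * NIR_LEAKAGE[c], 2) for c in "RGB")


print(rendered_rgb(1.0))    # (0.9, 0.4, 0.6) -> a magenta/purple tint
```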
Debut: Paris Fall Fashion Week, 2007.
Winner: Thailand Textile Institute's Top 100 Innovations, 2007.
Invited: CES Technology Fashion Show, 2011.
Patent: #8,531,308
The vending machine platform was an experiment bringing together what we had learned in Attentive User Interfaces and Considerate Computing.
Physically, the platform consisted of an array of 12 video screens in place of buttons. Video could span across all or a portion of the 12 screens, producing a fragmented large display. Two additional 22" monitors ran along the banner section of the machine. We had full control over the physical vending aspects of the machine, including taking money, making change, and dispensing product. We also included a camera for passive audience auditing.
Software considerations included using a customized version of AttentionMeter, a sophisticated presence system that can register a wide range of affective feedback via implicit facial gestures. An associated media experience was developed for this new platform. This experience included interactive games, news and sales.
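As a rough stand-in for that presence-sensing layer, the sketch below counts frontal faces in a webcam frame using OpenCV's stock face detector; the real AttentionMeter registered far richer affective feedback than this.

```python
# Minimal stand-in for AttentionMeter-style presence sensing, using
# OpenCV's bundled Haar cascade. This only counts faces oriented toward
# the machine; it is an illustration, not the deployed system.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def audience_count(frame) -> int:
    """Count frontal faces in a camera frame (passive audience auditing)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)


cap = cv2.VideoCapture(0)                      # the machine's audit camera
ok, frame = cap.read()
if ok:
    print("faces toward machine:", audience_count(frame))
cap.release()
```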
References
Patent: #8,594,838
Ted Selker, Connor Dickie, Matthew Hockenberry, John Wetzel and Julius Akinyemi. 2006.
Exhibited: Wired NextFest, New York City, 2006; PepsiCo HQ, Purchase, NY, 2006; MIT Media Lab, 2006.