We needed to choose one or two technologies to focus on for the Stamford Hackathon, and among the many languages, tools, and technologies we could have chosen, we went with Artificial Intelligence and the Internet of Things.
Artificial Intelligence: A Definition
Let’s start with what AI is. Most folks have a view distorted by Hollywood, one that assumes ‘self-awareness’ and often includes some form of world domination.
AI is first and foremost about creating computers that can do things that usually require human intelligence. However, saying a thing is intelligent because it does what intelligent people do isn’t really a definition, though that’s what a Google search will turn up.
A simple definition, then: computers are considered to possess Artificial Intelligence when they can deal with complex problems that require managing ambiguous inputs. In contrast, ‘normal’ computers need precise inputs, and small mistakes bring them to a halt.
Great examples of AI are speech recognition, machine vision, and machine learning. Why are these hard for computers, when they’re natural to humans?
Take speech recognition, for example. Anyone who has learned a second language can tell you that separating, and then understanding, words when they’re spoken naturally is very hard. Real speech is sloppy and fast, it skips around, and it is exactly the sort of ‘ambiguous input’ mentioned above. It has taken a great deal of work for researchers to create systems that can really understand language, and even then Siri gets it wrong a lot of the time.
Artificial Intelligence Enables Intuitive Applications
Anyone who is not a programmer can tell you stories of being frustrated at how stupid their computer has behaved at some point. Normal computers cannot infer anything; they cannot guess at what you mean.
Before AI, everything you did with a computer required precisely the right inputs to get what you wanted, and that’s just not how people normally behave. We assume that anything smart enough to do something useful is smart enough to understand what we mean when we say it our way. People have trouble with the idea that a computer can find the square root of pi in a millisecond but doesn’t know that I didn’t really mean to delete that file.
Thus a major promise of Artificial Intelligence is the creation of applications that can infer things about what you want. By understanding what you say, understanding your personality, or assessing the intent of what you’ve written, AI-enabled applications can begin to respond intuitively. As that behavior becomes more common, users will come to expect it everywhere, so our tech community needs to understand how to work with these AI services.
Why We Chose AI
The Stamford Hackathon has one primary goal: to foster the tech community in Connecticut. Giving developers and technologists the chance to really work with Artificial Intelligence services lets everyone learn how they work and what they can and cannot yet do.
Our partner, IBM, has a fantastic offering that’s perfect for hackathons in its Watson platform. As delivered through Bluemix, Watson has simple APIs that allow teams to build services in a plug-and-play manner rather than doing the AI themselves. And that’s the point of the hackathon: getting accustomed to using Artificial Intelligence in everyday applications.
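To give a feel for what “plug and play” means here, the sketch below assembles a request to a tone-analysis web service. The URL, version date, and credentials are illustrative assumptions, not the real Bluemix API; the point is simply that the “AI” is an ordinary HTTP call a team can wire into an app in minutes.

```python
import json

# Hypothetical endpoint for illustration only; check the Bluemix
# service catalog for the real URL and credentials for your instance.
WATSON_URL = "https://gateway.example.com/tone-analyzer/api/v3/tone"

def build_tone_request(text, username, password):
    """Assemble the pieces of a hypothetical tone-analysis call.

    A team would send this with any HTTP client (for example,
    requests.post(**req) after renaming keys to match the library).
    """
    return {
        "url": WATSON_URL,
        "auth": (username, password),          # service credentials
        "params": {"version": "2016-02-11"},   # assumed API version date
        "headers": {"Content-Type": "application/json"},
        "data": json.dumps({"text": text}),    # the text to analyze
    }

req = build_tone_request("I can't wait for the hackathon!", "user", "pass")
print(req["data"])
```

The service does the hard part (the actual machine learning); the team’s job is just to send text and interpret the JSON that comes back.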
Stamford Hackathon 2.0 is this weekend.
So if you’d like to see how our teams use AI, come to the Stamford Innovation Center on Sunday, Feb. 21st at 2pm. The presentation will be open to the public, and we’d love to see you there.
And if you’re a developer, designer, or project manager, join us at the Stamford Hackathon from Friday, Feb. 19th through the 21st. We’ve got great prizes from Uber, Harman, and more, and already have well over 100 developers and students coming. Sign up here.
It promises to be a fantastic weekend, and we can’t wait to see you there.