Saturday, May 24, 2014

The Unlock Project

While reading the comments on an "Ask Slashdot" story in which a reader described a family member with locked-in syndrome, I learned about the Unlock Project.  In their FAQ, they mention that they are seeking volunteer programmers to write Python code for the project.  For anyone with the skills and spare time, it looks like a great opportunity to contribute.

Thursday, May 15, 2014

Robot cars, utilitarian ethics, and free software

Robot cars could very well be programmed to kill their passengers under certain circumstances, namely, when doing so would save a larger number of lives.  In other words, they would be programmed to be utilitarians.

The idea that the end justifies the means has been heavily criticized as the basis for a general moral theory.  Given the wide variety of viewpoints on the matter, it seems to me that the only sensible approach is to ensure that passengers of robot cars have full control over the software that runs them.  The free software movement has been highly successful in its endeavor of giving users control over desktop PCs.  The movement is working towards the same goals with regard to smartphones.

For the sake of safety and giving users control over their environment, extending the free software movement to robots of all kinds may well be a moral imperative.


Thursday, May 8, 2014

MAICS-2014

A couple of weeks ago, I had the privilege of attending MAICS 2014 in Spokane, Washington.  It was my first visit to the state, and Spokane is a beautiful city.

I presented the work described in my paper "Creating Visual Reactive Robot Behaviors Using Growing Neural Gas".  It was well-received, and I got some useful and interesting feedback during the question period.  One particularly noteworthy observation was that I could consider incorporating color into the GNG nodes.  In my current implementation, I strictly use intensity values.  Creating an HSI-based GNG node should be straightforward.  I'm looking forward to experimenting with this approach when I get a chance.  (For more details, including videos, check out a previous blog post describing my work.)
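The HSI extension hasn't been implemented yet, but the conversion itself uses standard formulas.  As a rough sketch of what building an HSI feature for a GNG node might involve (this is the textbook RGB-to-HSI conversion, not code from my implementation):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB components in [0, 1] to (hue, saturation, intensity).

    Hue is in radians in [0, 2*pi); achromatic (gray) pixels get hue 0.
    """
    intensity = (r + g + b) / 3.0
    if intensity == 0:
        return (0.0, 0.0, 0.0)
    saturation = 1.0 - min(r, g, b) / intensity
    # Hue from the standard geometric derivation over the RGB cube.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        hue = 0.0  # achromatic pixel
    else:
        hue = math.acos(max(-1.0, min(1.0, num / den)))
        if b > g:
            hue = 2 * math.pi - hue
    return (hue, saturation, intensity)
```

A GNG node could then store a small vector of these triples per image region instead of intensity values alone, with the distance metric updated accordingly.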

I enjoyed most of the paper presentations.  Here are some themes I identified in this set of papers and presentations:
  • Fuzzy logic remains a go-to technique for bridging natural language and numerically-defined problem spaces.
  • The deceptively simple naive Bayes classifier retains a strong following.
  • Much progress has been made in creating interactive chat systems, an area to which I have not been paying much attention.
There were a few papers that I found particularly interesting.  Pakizar Shamoi presented a paper ("Fuzzification of HSI Color Space and its Use in Apparel Coordination") describing the use of fuzzy logic to describe different parts of HSI color space.  She then applied the fuzzy logic encoding to the problem of relating natural-language descriptions of fashionable clothing to images of clothing in an image database.  The use of fuzzy logic as a bridge between a text description and an image database lacking other annotation struck me as very innovative, and I think this technique could be very useful in a number of areas.
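I don't have their implementation, but the general idea of fuzzifying a color space can be illustrated with simple triangular membership functions over hue.  The sets and degree boundaries below are illustrative placeholders, not the parameters from the paper:

```python
def triangular(x, left, peak, right):
    """Triangular fuzzy membership: 0 outside [left, right], 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Illustrative fuzzy sets over hue in degrees (red wraps around 0/360).
HUE_SETS = {
    "red":    lambda h: max(triangular(h, -60, 0, 60), triangular(h, 300, 360, 420)),
    "yellow": lambda h: triangular(h, 0, 60, 120),
    "green":  lambda h: triangular(h, 60, 120, 180),
    "blue":   lambda h: triangular(h, 180, 240, 300),
}

def hue_memberships(hue_degrees):
    """Degree to which each linguistic color term applies to a hue."""
    return {name: mu(hue_degrees) for name, mu in HUE_SETS.items()}
```

A hue of 90 degrees, for instance, is partly "yellow" and partly "green", which is exactly the kind of graded vocabulary that lets a text query like "greenish top" match unannotated images.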

Every time I teach my artificial intelligence course, I cover several different supervised learning algorithms.  As a result, several students inevitably become interested in methods for combining classifiers.  Joe Dumoulin of NextIT Corporation presented an interesting method for doing exactly that.  If future students wish to experiment with this concept, I'll point them towards this paper.

I have always been most interested in exploring the idea of an intelligent agent through mobile robotics.  The presentation by Chayan Chakrabarti and George Luger persuaded me that I need to pay some attention to advances in interactive chat systems.  I plan to investigate how I might create a programming project that is simple enough to be completed within a week or two but sophisticated enough to include concepts akin to those described in that paper.

Syoji Kobashi presented some excellent work he had done in automated detection of candidate sites for brain aneurysms.  His system builds a 3D model of cerebral blood vessels, and then does some pattern matching to identify candidate trouble spots.  It was a great example of how sheer hard work and persistence with a difficult problem can pay off with excellent results.

Yuki and Noriaki Nakagawa gave a live demonstration of a new robot arm they have designed.  Their insight is that all too frequently the utility of robots is limited by the perception (and reality) that it is too dangerous for humans to touch them.  To ameliorate this, they are developing robots that can interact safely with humans, even to the extent of tactile interaction.  The arm they demonstrated featured a very clever design.  The gripper is effective but physically incapable of harming a human by crushing.  I enjoyed the opportunity to physically interact with the device, and I look forward to seeing what other innovations their company produces.  (You can see me arm-wrestling the device in the photo below.)

I'd like to thank Paul DePalma and Atsushi Inoue for all of their hard work putting the conference together.  I am looking forward to attending MAICS-2015 in Greensboro, North Carolina.


Friday, May 2, 2014

Show, don't tell your robot what you want

It would be valuable to get a robot to do what you want without having to program it.  If you could simply show it what you want, and have it understand, much programming effort could be saved.

I've recently taken a first step towards creating such a system.  It is very much a prototype at the moment.  You can read my recent paper for the details.  Here is an outline of the basic idea:
  • Drive a camera-equipped robot through a target area.  
  • Have the robot build a Growing Neural Gas network of the images it acquires as it drives around.  
    • Growing Neural Gas is an unsupervised learning algorithm.
    • It generates clusters of its inputs using a distance metric.  
    • It is closely related to k-means and the Kohonen Self-Organizing Map.
    • The number of clusters is determined by the algorithm.
  • For each cluster, select a desired action.
  • Set the robot free.  
    • As it acquires each image, it determines the matching cluster.
    • It then executes the action for that cluster.
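The steps above can be sketched in Python.  This is a minimal, generic Growing Neural Gas implementation with typical default parameters, not the code from the paper; for simplicity, node deletion is omitted so that node indices (and hence the user's node-to-action assignments) stay stable:

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal GNG sketch (after Fritzke, 1995); defaults are typical,
    not the parameters used in the paper."""

    def __init__(self, dim, max_nodes=30, eps_b=0.2, eps_n=0.006,
                 age_max=50, insert_every=100, alpha=0.5, decay=0.995):
        rng = np.random.default_rng(0)
        self.w = [rng.random(dim), rng.random(dim)]   # node weight vectors
        self.error = [0.0, 0.0]                       # accumulated error per node
        self.edges = {(0, 1): 0}                      # (i, j) -> age, with i < j
        self.max_nodes, self.eps_b, self.eps_n = max_nodes, eps_b, eps_n
        self.age_max, self.insert_every = age_max, insert_every
        self.alpha, self.decay = alpha, decay
        self.steps = 0

    def nearest(self, x):
        d = [float(np.sum((w - x) ** 2)) for w in self.w]
        order = np.argsort(d)
        return int(order[0]), int(order[1]), d

    def neighbors(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

    def train_step(self, x):
        self.steps += 1
        s1, s2, d = self.nearest(x)
        self.error[s1] += d[s1]
        self.w[s1] += self.eps_b * (x - self.w[s1])   # move winner toward input
        for n in self.neighbors(s1):                  # move neighbors slightly
            self.w[n] += self.eps_n * (x - self.w[n])
            self.edges[(min(s1, n), max(s1, n))] += 1
        self.edges[(min(s1, s2), max(s1, s2))] = 0    # refresh/create winner edge
        self.edges = {e: a for e, a in self.edges.items() if a <= self.age_max}
        if self.steps % self.insert_every == 0 and len(self.w) < self.max_nodes:
            self._insert_node()
        self.error = [e * self.decay for e in self.error]

    def _insert_node(self):
        """Insert a node halfway between the highest-error node and its
        highest-error neighbor; this is how GNG decides the cluster count."""
        q = int(np.argmax(self.error))
        nbrs = self.neighbors(q)
        if not nbrs:
            return
        f = max(nbrs, key=lambda n: self.error[n])
        r = len(self.w)
        self.w.append(0.5 * (self.w[q] + self.w[f]))
        del self.edges[(min(q, f), max(q, f))]
        self.edges[(min(q, r), max(q, r))] = 0
        self.edges[(min(f, r), max(f, r))] = 0
        self.error[q] *= self.alpha
        self.error[f] *= self.alpha
        self.error.append(self.error[q])

    def classify(self, x):
        """Index of the matching cluster for a new input."""
        return self.nearest(x)[0]

# Once trained, the user assigns one action per node (the "programming" step),
# and the control loop simply executes actions[gng.classify(image_features)].
```

The node-to-action lookup is just a dictionary from node index to a motor command, which is why keeping node indices stable matters in this sketch.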
Here is a screenshot of the "programming" interface.  This example was from a 26-node GNG.  Each cluster has a representative image.  The action is selected from a drop-down menu.  For these first experiments, we are training the robot on an obstacle-avoidance task.  The goal is to maximize forward motion while avoiding obstacles.

Needless to say, the system is not perfect at this stage.  The biggest problem is that some clusters are ambiguous: images that should trigger different actions map to the same cluster.  As a result, the robot sometimes reacts to "ghosts", and other times it hits obstacles.  Going forward, a major research focus will be to find a way to automatically grow the network further to compensate for this problem.

I'd like to share two videos of the robot in action.  In the first video, the robot does a good job of avoiding obstacles until it gets stuck under a table:

In the second video, the robot sees a number of ghosts, but for the most part it maintains good forward motion for over five minutes:

Idealistic students and the job market

In class on Tuesday, during a discussion of computing ethics, the students were of the opinion that it is unethical for companies to require employees to sign away all intellectual property rights for the duration of their employment.  They also speculated about whether it is even ethical to agree to work for such a company.

Such clauses are common, of course, because programming jobs do not lend themselves to being defined in terms of "hours worked".  Since usable work can be (and is) produced at all hours of the day or night, it makes sense from the company's point of view to claim ownership of all ideas produced by the employee.

It was interesting to juxtapose this discussion with an experience I had at a conference (MAICS 2014) the previous weekend.  Two research presentations were given by employees of a company, NextIT, which develops interactive customer-service systems for its clients.  Now, I do not know if NextIT maintains an intellectual-property clause in its employee contracts, but it was notable that its employees were presenting the technical details of innovations they had created in order to make their systems work better, with the encouragement of their employer.

This suggests that disclosure of ideas is not necessarily the commercial-suicide scenario businesses seem to fear.  Removing such clauses may well attract stronger employees with a higher sense of goodwill without really doing any damage to the company's business plans.  It would be very much worth examining whether the harm done by such clauses exceeds their (alleged) benefits.