Others have written about this before, but I’ll underscore the sentiment that managing a local development environment on OS X is a royal pain when that environment requires open source software. At the companies I’ve been involved with, we generally eschewed local development environments and instead gave everyone access to a development server that included the requisite databases, web servers, and vhost entries. It worked OK, but there are significant drawbacks: unit testing and environment experimentation suffer, the shared server is a single point of failure, and simple per-developer needs like refreshing your own copy of a dev database become contentious.

As a hobbyist with a simpler environment, or as a developer deploying to Heroku or another cloud platform, local development is the way to go, and here is where Mac OS X makes life difficult. There are several package management systems out there that tend to step on each others’ toes (and it seems language and framework ecosystems always prefer the one you’re not using). Mac OS X also tends to haphazardly ship versions of Python or Ruby or whatever that are a couple of versions behind, then not upgrade them until an OS refresh. That refresh (cough, Lion) will fail to mention it’s upending your world until you try to use the environment that’s always worked.

Here’s my solution: just use VirtualBox. Deploy an Ubuntu or Debian server, link that server to your local development directory, and you’re done. Then use the excellent package management that Linux affords to set up your environment in about ten seconds. This has another advantage: you can reuse the deployment hooks (Chef or Puppet) you’re already using on your production servers.

Once you’re up and running, here’s how I work: I edit and run git from outside the virtual machine, and run the environment and web browser from within the machine.
Still to do: see if I can use my OS X browser to hit the virtual machine’s private IP so that all my tools run externally (a little easier for workflow) and the virtual machine just acts like an external server.

Now you have a fully fledged (free, and always available) server, and you can still retain your Mac toolchain when and where you want it without worrying about Apple and OS X pulling the rug out from under you. Remember: encapsulation of a work environment is just as important as encapsulation of code.
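For the “link that server to your local development directory” step, VirtualBox’s shared folders do the trick. A sketch, assuming a VM named “dev” and a ~/Projects directory (both names are placeholders), and that the Guest Additions are installed in the guest:

```
# On the OS X host: expose ~/Projects to the VM as a share named "projects"
VBoxManage sharedfolder add "dev" --name projects --hostpath ~/Projects

# Inside the Ubuntu/Debian guest: mount the share
sudo mkdir -p /home/me/projects
sudo mount -t vboxsf projects /home/me/projects
```

An entry in the guest’s /etc/fstab makes the mount survive reboots.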
My first app store experience was many years ago, before Apple, before iOS, before any of the other app stores you see today. It was Ximian’s Red Carpet subscription service, which delivered a library of applications (packages) over the internet via an easy-to-use command line tool. There was even a GUI, if I remember correctly. Then Debian/Ubuntu came around with their package repositories, and it was such a major usability difference between Linux and Apple/Microsoft that it was only a matter of time before the idea caught on. Of course, the ideals behind the Linux offerings – ease of use, reliability, and compatibility – are supplanted somewhat by the key aims of profitability and control inherent in modern app stores, but who’s counting?

Things I wish the Apple App Store had (these are post-Lion upgrade thoughts):
- Some way to know the schedule of app update notifications – it’s unclear to me whether checks happen daily, whether I have to have the App Store application running all the time, etc.
- The App Store should intelligently close your application when updating an existing application.
- The App Store should be able to store your credentials, and not require them for an update if the app is in a safe state (e.g. closed).
- There should be a compatibility layer in Lion that lets you run your iOS apps on your Mac. I’m sure this is coming, but I wonder when.
- The App Store should offer to scan your hard drive and find applications that it can manage for you.
- On the Featured and other pages, you should be able to hide apps you’ve already installed.
- Somewhere down the line, it might be interesting to have a “lists” feature like Amazon. Apple could even show what apps certain celebrities use in lists like the inflight magazines do for travel accessories. Maybe that’s too much on the pointless-marketing side.
I’m sure there are other features I’m missing, but overall I’m happy with the experience. The App Store managed Lion install was incredibly painless, and so much nicer than having to mess with the Apple store, or a nasty CompUSA.
A while ago I wrote an Open Letter to Mint.com laying out some major concerns I have with their service and their security implementation. Almost all the comments, both here and on Hacker News and Reddit, fell into three categories:
- From non-Americans: How is a service like Mint.com even possible or legal? US Banks don’t have two factor security?
- I totally agree that Mint.com and their service is insecure and I don’t use them!
- I agree that Mint.com needs better security, but their service is great and anyway, it would be too time consuming/too expensive/too hard/too impractical to implement these security improvements.
Between the time I wrote that letter and now, we’ve seen RSA (the only major token-based two-factor security provider) have all of its hardware tokens compromised, to much public uproar. At Sentry Data Systems, we’ve had two-factor security implemented for years, using time-based cookies and additional security questions to challenge users logging in from a device that hadn’t been previously authorized. This is similar to how many banks in the US do two-factor security, if they choose to implement it. While not a HIPAA requirement, we felt it was a great feature that provided an additional layer of protection.

We’d originally offered RSA SecurID tokens to customers, but found that most customers balked at the price, and even those who did use the tokens would often simply tape them to their computer monitor or keyboard, or forget the token at home, which would cause quite the contentious support call. This experience brought to the forefront several issues I had with hardware-based tokens:
- Casual users, or those who didn’t value the two-factor benefit, would simply leave the token lying around or affix it somewhere – it wasn’t natural to expect a user to carry one more thing with them day-to-day.
- If there was a compromise, you have to replace all of your hardware, for everyone, everywhere.
- They were expensive.
- They were highly recognizable and screamed to informed observers that you had access to a system that was considered high-value by someone.
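The time-based cookie approach described above is simple to picture. A minimal sketch of the general technique – not Sentry’s actual implementation; the secret, lifetime, and cookie layout are all assumptions for illustration:

```python
import hmac
import hashlib
import time

SERVER_SECRET = b"rotate-me-regularly"   # hypothetical server-side secret
DEVICE_COOKIE_LIFETIME = 30 * 86400      # trust a device for 30 days

def issue_device_cookie(username, now=None):
    """Mint a signed, time-limited cookie after the user passes a challenge."""
    now = int(now if now is not None else time.time())
    payload = f"{username}:{now}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def needs_second_factor(username, cookie, now=None):
    """True if the login should be challenged with security questions."""
    now = int(now if now is not None else time.time())
    try:
        cookie_user, issued, sig = cookie.rsplit(":", 2)
    except (AttributeError, ValueError):
        return True                      # missing or malformed cookie
    payload = f"{cookie_user}:{issued}"
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return True                      # forged or corrupted cookie
    if cookie_user != username:
        return True                      # cookie belongs to a different user
    return now - int(issued) > DEVICE_COOKIE_LIFETIME   # expired trust
```

A previously authorized device presents a valid cookie and skips the challenge; everyone else gets the extra questions.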
I even went so far as to start sketching out an iPhone app that we could deploy for our customers, but it seemed like quite a lift to do well (a correct implementation is key in cryptographic systems), and it was with much delight that I ran across an outfit called Duo Security, based in Michigan. They have put together a fantastic service that provides both SMS-based (challenge/response) and one-time password (via an iPhone or Android app) options for two-factor security. I signed up for the service, installed their package on my Ubuntu Linux server, and within about 15 minutes I had a very strong two-factor solution that avoids all the drawbacks of the hardware token approach…for free. Yes – they provide up to 10 users for free to let you get your feet wet and see how the system works.

With the token being my phone, I’m not going to forget it, it doesn’t draw attention to itself, I can’t tape it to my workstation, and they can update the software if they need to. If their service goes down, you can configure it to not require the second factor (the default), or you can choose to prevent logins and keep a private key around for last-ditch logins. Of course, for those of us running cloud-based servers, there is still the risk that your hosting account could get compromised, giving an attacker shell access to the machine – hopefully Slicehost and other services will implement this type of additional security soon (Amazon’s EC2 cloud already offers two-factor security as an option).

Duo Security can be implemented quickly and easily with any web application, a lot of VPNs, and your Unix/Linux servers. If you’re doing anything with medical, financial, or other sensitive data, you should definitely check them out. If you just like additional protection for your own servers and services, they’re a great option as well.
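For reference, the SSH integration I set up boils down to building their duo_unix module and pointing it at your account keys. A sketch from memory – the values are placeholders, and paths may differ by version:

```
; /etc/duo/login_duo.conf
[duo]
ikey = <integration key from the Duo admin console>
skey = <secret key>
host = <api-XXXXXXXX.duosecurity.com>
```

With that in place, a ForceCommand /usr/sbin/login_duo line in sshd_config prompts for the second factor after a successful SSH authentication.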
Just in case you’re curious: Duo Security put up a great blog post about the steps they’ve taken to prevent compromise if they came under the same attack as RSA. A few thoughts on improvement:
- Give me an apt package please! I don’t want to compile things, and I don’t want to edit configuration files; these things make it hard to deploy on lots of servers. I talked with a support rep from Duo Security and they told me this is already in the works.
- Put a login form on your website! They email the login URL to you but I shouldn’t have to remember it.
- It’s a little unclear to me if the pricing scales well – if I’ve got the same 35 users accessing 100 machines, does that mean I pay 35x100x$3? That seems expensive. Of course, it’s still way cheaper than RSA, but at least with RSA you could bind an account to a token and not worry how many servers you were accessing. It’s possible that a single user license crosses the server boundary, but again, I’m unclear on that.
Bringing it all back to the original point – there is simply no excuse why a service like Mint.com doesn’t use Duo Security to protect its own users’ logins. But the second issue still exists – how do banks provide access to consumers of financial data without compromising the entire account? A poor man’s solution of sorts could be for banks to provide read-only accounts that use generated, revocable passwords. Google takes this approach with its own two-factor implementation for Gmail: you get texted when logging in normally, but for other applications, you generate a password that can be revoked at any point. It seems like a decent compromise – you can’t control the account from that login, and the password is of sufficient length and complexity that it’s unlikely to be brute forced. My initial suggestion of using OAuth is essentially the same thing.

Congratulations to the guys/gals at Duo Security on providing a really great set of tools for developers and users. I really hope it catches on and more providers begin offering two-factor as an option.
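The “generated, revocable password” idea is easy to picture. A minimal sketch of the general shape – not Google’s actual mechanism, and the class and method names are my own:

```python
import secrets

class AppPasswordStore:
    """Per-application passwords that can be revoked independently of the
    main account credential, granting limited (e.g. read-only) access."""

    def __init__(self):
        self._active = {}   # app name -> generated password

    def generate(self, app_name):
        # 26 random bytes -> a long, high-entropy, URL-safe token that is
        # impractical to brute-force
        password = secrets.token_urlsafe(26)
        self._active[app_name] = password
        return password

    def revoke(self, app_name):
        self._active.pop(app_name, None)

    def is_valid(self, password):
        return password in self._active.values()
```

The point is the decoupling: compromising (or revoking) the password you handed to one aggregator never touches the real account credential.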
Recently I decided I needed a little more flexibility with my phone situation. Years ago I was carrying two cell phones and had three Vonage lines while running my own business. This got consolidated down to a single iPhone, but that can be a little problematic, particularly if you’re calling to/from international numbers. This week I ported my iPhone number to Google Voice (within 24 hours, too), and got a new number for my cell that I’m hoping to keep private and use as a throwaway. However, I needed a bit more flexibility for some of the things I wanted to do, so I threw Tropo into the mix. Twilio lost out because Tropo provides free inbound and outbound calling.

So here’s the path when you call my number: the call comes into Google Voice, which forwards it to my Tropo application, which then plays a menu – you can either punch out to Sentry’s main number or continue on to ring my cell. Text messages are forwarded by Google Voice, and the net result is that for inbound calls, I’ve effectively decoupled the phone number I’ve had for seven years from any handset or location and added a whole bunch of flexibility.

It’s almost eerie how much power Tropo gives you over your telecom setup. With a few lines of code I can transfer calls, accept inbound international calls with a local number, kick out text messages, provide a menu, have their computer voice speak any text I want, etc. Call quality is crystal clear through both Google and Tropo, and I have yet to have any reliability problems. In 2004 we thought it was amazing to replace a 150k Avaya PBX with Asterisk, but this replaces all of that with about twenty lines of code. For free. With no setup or ongoing hardware or maintenance costs.

I’d say the only real drawback is the inability to spoof outbound caller ID with native dialing – it would be interesting to see if Apple allows you to hook other providers into its native dialer (yeah right), or if this is a feature within Android.
It definitely needs to be implemented at some point – and then we’d have true telecom nirvana.
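The menu logic in that Tropo application is tiny. The sketch below models just the routing decision in plain Python – the real script runs inside Tropo’s hosted environment and uses their scripting verbs to play the prompt and transfer the call, and the numbers here are placeholders:

```python
# Placeholder destinations (not real numbers)
SENTRY_MAIN = "+15615550100"
MY_CELL = "+15615550177"

def route(digit_pressed):
    """Given the caller's menu choice, return the number to transfer to.

    '1' punches out to Sentry's main line; anything else (including no
    input before the menu times out) falls through to my cell.
    """
    if digit_pressed == "1":
        return SENTRY_MAIN
    return MY_CELL
```

Everything else – answering, the spoken menu, the actual transfer – is a handful of platform calls wrapped around this one decision.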
A few weeks ago I attended GitHub’s CodeConf in San Francisco. While there, I got to meet quite a few really accomplished technologists (hackers) and discuss a variety of projects, processes, programming methods and more. One of the most interesting moments for me came over lunch while talking to the CTO of a very well known blog which clocks in at over 5 million unique visitors a month. Like most sites of its type, it receives almost 100% of its revenue from ads. According to him, one of the largest (new) challenges they were facing was that advertisers are beginning to buy ads targeting the blog’s fans on Facebook, not the blog itself. In other words, to get at the blog’s users, advertisers were paying Facebook less money to market directly to the blog’s fans on Facebook.

The more I think about it, the more I think this is a major problem for almost every ad-supported site out there, and it could be the pitch that Facebook is using to bolster its insane valuations. Right now, there are probably no less than a dozen Googlers being kept up at night worrying over this very problem, not to mention the admen at hundreds of highly trafficked blogs and other internet properties. After all, if I can immediately pitch my competing product to your customers without paying you a dime, I’ve got a huge advantage, you’ve got a huge problem, and Facebook has an unbelievably great strategic position.

Maybe you’re reading this and thinking “yeah, that’s old news,” and it probably is to many, but having never worked at an ad-supported organization, I’d certainly never thought about it before. I’ve also never heard it articulated online, and I’m wondering how many organizations even realize this is happening. Note there is a two-fold risk here: ad-supported properties risk losing ad revenue to Facebook, and they risk exposing their customers to competition.
If you’re an advertiser, you’d much rather know that you’re reaching all 10,000 fans of Blog X, with the stats to show you who clicked, etc., vs. an anonymous 100,000 impressions. Note that even if a blog chose not to have a Facebook page to try to combat this kind of thing, Facebook can still harvest the users who “like” the blog in their profiles.

I used to think that the benefits of a Facebook presence for an organization outweighed the downsides, but now I’m not so sure, particularly for ad-supported businesses. It’ll be interesting to see how this plays out.
The last three places we’ve lived in South Florida were “gated communities,” which is supposed to make you feel exclusive and special. They provide zero additional security (we had a car stolen from one of them in the middle of the night), they’re often broken, and even when they work they’re a pain. All of our gated communities would link your personal code to one of your phone numbers, and when visitors keyed in “112” it would ring your phone. This causes problems:
- The gate dialer can only link to one phone. If your wife is traveling and you want a pizza delivered, she may not be able to pick up the phone and press 6 to let the pizza in.
- Most can only link to one or two area codes. One of the systems could only link to a 954 area code number.
- If you’re riding with someone else and don’t have your remote with you, you can’t get in if your wife isn’t along, or if her cell phone is buried in a bag in the trunk.
The Wife has been out of town for a few days, and this finally irritated me to the point where I headed over to Tropo.com and provisioned a simple phone application. Now when you dial the phone number of my Tropo app, it answers, says “Opening the Gate!” and plays a number 6 key press, which tells the gate to open. Perfect.

I heard about Tropo out at CodeConf in San Francisco and had wanted to play with it, but didn’t have a problem to solve until now. The entire thing took about 10 minutes to set up, with the only really painful part being hunting down a key press sound from http://www.freesound.org/ and converting it to GSM format. I ended up using the excellent SoX command line sound converter, and then we were in business. Total cost for the whole thing was zero dollars.

The Tropo service is really nice, and their documentation is good too. Their website UI is a little clunky in spots. For example, picking an area code for your number is really painful, with about 50 city suggestions and no way to search for an area code or specific city. They’re not alphabetized as far as I can tell, either, and the city names are super specific, which just makes it hard. Also, I couldn’t find a way in their API to generate a key press tone, which meant I had to mess with my own sound files. That should be built right in, or they should provision a directory of key press sounds with your default files.

All in all, a fun little project to get done while on Amtrak bound for Orlando, and now I can open my gate whenever I want. Tropo has done a great job with their platform, and I’d highly recommend it for these types of tools or any kind of telephony or communications application.
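For the curious, the SoX conversion is a one-liner. A sketch, assuming a source file named keypress.wav (the filename is a placeholder) – GSM audio wants 8 kHz mono, and SoX picks the GSM encoder from the .gsm extension:

```
# downsample to 8 kHz mono and encode as GSM
sox keypress.wav -r 8000 -c 1 keypress.gsm
```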
I’m out here in chilly San Francisco for CodeConf, a conference for programmers sponsored and hosted by the folks at GitHub. This is my first real time spent in San Francisco (previously it’s just been through the airport, or a one-night stay due to an aircraft engine problem while trying to make it through the airport), and it was with much delight that The Wife and I experienced the Ferry Building and its farmer’s market for breakfast. Despite problems with the staff finding my registration, I made it in time to hear all the talks for the day, and in general the speakers were very good and the subject matter was interesting.

Dr. Nic Williams

The first talk was by crazy Australian Dr. Nic Williams, who talked about the importance of learning something, making it into a tool, and then, once you’re in this habit, making tools that help you build tools faster and more efficiently. Simple, and even though it felt a little forced at times, he made the talk unforgettable with movie clips from Tinkerbell, the theme song from the A-Team, and his choice of clothing, which was a pink tutu and fairy wings. He left the podium with AC/DC blasting to applause.
- He talked about building textmate snippets to help with database migrations in rails.
- Bundling those snippets for better/easier distribution with .dmg
- Building a tool to help with the construction of .dmgs (choctop)
All in all interesting, if a little bizarre.

Coda Hale

Next up was Coda Hale from Yammer, who gave the best talk of the day – easily worth the price of admission for the entire conference (and I’m saying this only 50% of the way in). His topic was code instrumentation, and he discussed the techniques and ways we need to measure our software so that we can implement the OODA loop: Observe, Orient, Decide and Act. OODA is a combat operations process that was designed in the Air Force, and many of us in the development or operations groups of tech companies can see the similarities between combat and figuring out what’s happening with complex software stacks.

With OODA as our goal, we need to employ five different ways of measuring: gauges, counters, meters, histograms, and timers. Yammer has wrapped these tools into a JVM-friendly project located here that they use to publish meaningful metrics to downstream analytic consumers like Nagios and Ganglia. He spent some time going into the statistical models they use to break histograms down into meaningful quantiles without torching huge amounts of disk space, and while he recognized most of us in attendance aren’t using the JVM, the challenge was laid down to get these tools into the hands of programmers using other runtimes and languages.

The bottom line was: “If a piece of code affects our business, we must instrument it.” He underscored this with an example of two different ways to call a sort. One should be faster due to its underlying construction, but he then showed how the code calling it actually had a sleep(100) within the call loop. In other words, without instrumenting this in production we have no idea which one is faster, and we’re probably wrong until we close the gap between our mental model and the executing code by measuring it.

This was an absolutely fantastic talk, and his slides can be found here.

Other Speakers / Presenters
- Jonathan Rentzsch talked about “Design by Contract” or “Contract Driven” programming. The examples seemed to be preprocessed assertions (not unit tests) that are fed and managed by the compiler. More research needed.
- There was a great demo, in about 3 minutes, from the folks at Tropo.com (a Twilio competitor): a Tropo app connected to a redistogo.com Redis queue, which talked to a Node.js process. When the demoer called a phone number, it asked him which color he wanted, used voice recognition to process “blue” or “yellow,” and the background of the website changed in near real-time.
- Creator of Node.js Ryan Dahl spent his time in very animated fashion blasting out some memorable one-liners while discussing the efforts that are underway to port Node.js to Windows.
- One of the founders of Django talked about the need for clear documentation and said some controversial things about tools like rdoc or jdoc. His bottom line: make sure you’re answering the who, what, when, where, and why in your documentation and there’s no substitute for human written docs.
- Former Lifehacker Gina Trapani talked about the importance of community in Open Source projects. She’s currently managing/contributing to ThinkUp and talked about how many Open Source communities struggle to integrate and accept contributions from non-programmers.
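Coda Hale’s five instrument types are easy to picture in any runtime. A minimal sketch of three of them – my own illustration, not the API of Yammer’s library:

```python
import time

class Counter:
    """Monotonic count of events (e.g. requests served)."""
    def __init__(self):
        self.count = 0
    def inc(self, n=1):
        self.count += n

class Gauge:
    """Instantaneous value read on demand (e.g. queue depth)."""
    def __init__(self, read_fn):
        self.read_fn = read_fn
    @property
    def value(self):
        return self.read_fn()

class Timer:
    """Context manager recording durations of a block of code, in seconds."""
    def __init__(self):
        self.durations = []
    def __enter__(self):
        self._start = time.perf_counter()
        return self
    def __exit__(self, *exc):
        self.durations.append(time.perf_counter() - self._start)
```

Meters (rates over time) and histograms (distributions with quantiles) build on the same idea; the point is that each instrument answers a different “what is the code actually doing in production?” question.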
The food that’s been provided has been fantastic showcasing a lot of local ingredients and vendors. The conference hall is probably a bit too small and a little cramped, and there is no power provided at your seat. The 75% of us who brought laptops whittled down to about 2% by the end of the day as we ran out of power. The night events all involve open bars at what seem to be nice venues. All in all, an enjoyable first day at CodeConf.
We had a great time last night watching Watson take on two humans in a round of Jeopardy. Or at least, I had a great time. The Wife and her sister weren’t quite as into it as I was, but they watched it just the same. Here’s a recap:
- The show did a great job explaining what was happening (they burned half the episode on explanations).
- It’s interesting how most people don’t understand what the true challenge of this event is (even techies) – Watson has huge volumes of information (he knows a lot of stuff), but the real challenge is understanding the meaning behind a question. In other words, it’s an understanding/comprehension challenge, not a fact challenge.
- IBM came up with a really neat tool that showed the audience how Watson was playing the game: the top three answers he came up with, each with a confidence score. Watson would buzz in with the highest-rated answer if it crossed his confidence threshold. If none of the answers made it across the threshold, he wouldn’t buzz in.
- Alex Trebek gave a tour of the datacenter which had ten-ish racks of IBM servers. The size of the install was very surprising to our non-technical viewers.
- Watson glows green when he’s confident in his answers, and when he gets one wrong, he glows orange. This feature was a big hit at our house.
- Two perfect examples came to light exposing the difficulty of this challenge. One question made references to the Harry Potter world and a dark lord who challenged him. It was clearly a Harry Potter question due to the contextual clues, but the answer was “Lord Voldemort”. Watson answered “Harry Potter”, but his second choice answer was “Lord Voldemort”. A human who understood the meaning of the question would never have answered in that way. The second occasion involved Jennings answering “the twenties” to a question, which was wrong. Watson buzzed in right after him and answered, “the twenties,” which no human would ever do.
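The buzz-in behavior from the on-screen tool amounts to a simple threshold rule – something like this sketch (the threshold value is made up for illustration):

```python
def decide_buzz(candidates, threshold=0.5):
    """candidates: list of (answer, confidence) pairs, e.g. Watson's top three.
    Returns the answer to buzz in with, or None to stay silent."""
    if not candidates:
        return None
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer if confidence >= threshold else None
```

Which is exactly why the “Harry Potter” miss is so interesting: the right answer was sitting in the candidate list, it just wasn’t the one with the highest score.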
One question I had was whether the text transfer of the clues happens in step with Alex Trebek’s reading of them. Does it happen character by character, or does Watson get a few precious seconds while the humans are reading the screens? Conspiracy theorists would probably ask how Watson’s first choice was an 800-dollar question (unusual) and he hit the Daily Double immediately, but it could be part of the IBM team’s strategy.

All in all, that was probably the most fun I’ve had watching a TV game show. Looking forward to the next two episodes.
Tomorrow Jeopardy will feature a computer facing off against former champions Ken Jennings and Brad Rutter. I have been looking forward to this for a couple of months, since I first heard about it, and it is an amazing feat of computer science that a computer can go head to head with humans on a game show as complex as Jeopardy. IBM is the true leader in feats (stunts?) such as these, having staged previous competitions against chess champion Garry Kasparov. The show has already been filmed (on a special sound stage at IBM’s location), but I do have an inkling of what the engineering team probably went through prior to the event.
My freshman year in college, I managed to convince my professors that I could and should skip the computer science prerequisite for everything, COS120. This put me into a data structures course in the first semester of my freshman year, and several linked lists and b-trees later, I was able to take any class I wanted in the spring. I made a bee-line for a class called “Intro to Artificial Intelligence,” as I was more than intrigued by the possibility that I could build Skynet and bring the human race to its final doom. Sweetening the pot was a rumor circulating the labs that the class would make use of the very recently released Lego Mindstorms. Fulfilling the dreams of nerds and Lego aficionados (with an admittedly strong correlation between the two groups), the Mindstorms were fantastically expensive for a college kid (several hundred bucks) and required programming knowledge to really make them hum along. This AI class, in other words, was my ticket to Lego Nirvana, not the other way around.
Legos and LISP
I managed to sign up for the class and was one of (I think) two freshmen. Having spent high school and the fall semester doing work in C and C++ along with learning Perl and a newfangled language called PHP, it was quite a surprise to show up for class and begin learning LISP. For those of you who are non-techies, LISP is a complete paradigm shift from how most programming languages allow you to express yourself. The analogy might be going from sketching in black and white to being handed multiple colors of modeling clay and being told to sculpt in three dimensions. LISP is the oldest programming language still used in a mainstream capacity today. LISP treats data and the program the same, so you can build a self-modifying program, which is one of the reasons Artificial Intelligence researchers and applications use it so much. LISP has a weird-looking “list based” syntax (LISP stands for LISt Processor) and allows you to solve weird problems in fairly elegant ways. In fact, some like it so much, and feel it is so powerful, that LISP is what Paul Graham (a well respected computer scientist and technology entrepreneur) credits for his business success. He felt he was able to elegantly and more efficiently out-build his competition simply because of their language choice. I don’t really agree with Graham, but the point is that LISP is a very cool and very different language that was a pleasure to learn. This class remains the only one in college where I did every single homework assignment. The problems were just too fun, and the ways of solving them were extremely different.

Along with LISP, of course, came the Legos. Almost immediately, we began building simple Lego robots that would follow a line on the floor, navigate a room, follow a light source in a dark room, or navigate a maze. It was unbelievably cool, and a testament to Lego’s design prowess that their system allowed for such flexibility.
Each kit provided a command unit and several motors and sensors. There was a touch sensor, light sensor, and a few others, and the command unit would download the programs via a cable hooked into a computer. There was a Lego provided GUI-based way you could program the processor, but hackers had already provided several other languages, and we used Not Quite C (NQC) which was very similar to the C programming language with a few restrictions.
A Chess Playing Robot
All in all, we were having a blast, and just like kids get bored with a slide and start daring each other to go down backwards, or standing up, we began competing to build a bigger, better robot. Our professor was also the chair of the department, and sensing he had something brewing, he got the class together and challenged us to come up with a project that could bring fame and glory to our small liberal arts university. A robot that could get us soda from the machine! A robot to gather up homework assignments and drive them back to the professor! (Not as popular as the food and waiter suggestions.) Finally, from the back of the class came the ultimate idea: a chess playing robot! A senior in our class had been working on an independent study in addition to taking this class, and had produced a fairly workable chess simulator, written in LISP. The class began buzzing with excitement. We could build the robot, then challenge students to beat it! I started to get a bad feeling about the project – building a Lego chess playing robot would be quite the undertaking. My good friend Aaron Williamson upped the ante – “I could get the university TV station to film a match!” Uh oh. A third student offered to get one of the most popular professors in the school to face off against the robot, and just like that we had ourselves our very own TV spectacle: Man vs. Machine. Nerds vs. Normals. Technology (us) vs. Philosophy (the professor).
The work was divided up, and we immediately had to start pulling late nights, as we only had a few weeks to get everything together. There would be a vision processing team (the robot had to know where the chess pieces were), a robotics team which would build the gantry crane and arm mechanisms, and the chess software team. The public relations, we soon learned, would take care of itself.

I was on the vision team, and our job, we felt, was quite possibly the hardest. This was 2000, and web cameras were novel, low resolution, expensive, and rather rare. At least, they were expensive for college students, so we used the only one we had available: a web camera attached to a Silicon Graphics O2 workstation that provided a 640×480 color picture. It provided enough resolution that we could film the board, and our algorithm would take one picture, save it, then take another after the human moved, and compare the two to determine which two squares had changed – and therefore which piece had moved. This seems pretty trivial, but it was complicated by a fisheye effect from the lens, and by the fact that the robot arm (or human) wouldn’t actually place the pieces very accurately. Lighting and other conditions could also change depending on where the human was standing, or for a host of seemingly random factors.

As we started to work, even simple things seemed to derail us. Silicon Graphics apparently used a proprietary RGB format which was the reverse of standard RGB, so we had to write a converter. IRIX, the SGI OS, turned out to be a pain to develop for, at least compared to Linux, so we performed the processing on another computer. This meant we also had to set up the workflow between the IRIX machine, which captured the image, and the Linux machine, which processed it, and then interface with the robot and the chess playing program, which ran on yet another machine. The date for the campus-wide demonstration was looming, and we were determined not to be humiliated.
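The core of the vision algorithm – diff two frames over the 64 board squares and report the squares that changed most – can be sketched like this. This is my reconstruction of the idea, not our actual code; the inputs here are just 8×8 grids of per-square average intensity:

```python
def changed_squares(before, after, top_n=2):
    """before/after: 8x8 grids of average pixel intensity per board square.
    Returns the top_n (row, col) squares with the largest intensity change -
    for a normal chess move, the origin and destination squares."""
    diffs = []
    for row in range(8):
        for col in range(8):
            delta = abs(after[row][col] - before[row][col])
            diffs.append((delta, (row, col)))
    diffs.sort(reverse=True)
    return [square for _, square in diffs[:top_n]]
```

All the real pain lived upstream of this function: correcting the fisheye distortion, mapping pixels to squares when pieces sat off-center, and keeping the per-square intensities stable as the lighting shifted.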
The student newspaper ran a feature, and our opponent was quoted as saying that “no hunk of metal would best him,” to the delight of the student body.
Finally, the day was upon us, and we began preparing the lecture hall that would be ground zero. We had lots of equipment to move from our basement computer lab to the lecture hall, and as we began calibrating everything alongside the film crew, which was also setting up, we noticed that the vision system wasn’t working. At all. It couldn’t find the edges of the board, couldn’t determine the grid, and couldn’t see which pieces had moved. We were in trouble: our calibration held under the basement’s fluorescent lighting, but not under the lecture hall’s incandescent and TV lighting, and we needed to redo everything. Working like mad, we tracked down all the intensity assumptions in our program and moved pieces around until finally we got it working. The format of the competition was designed so that while the computer was thinking, we’d explain to the audience what was happening, and our opponent would also discuss the philosophical ramifications of sentient machines. This was designed for another, secret reason: if something went wrong, we wanted to be able to troubleshoot without drawing too much attention. We had a backdoor programmed into the chess program that could reset it if there was trouble, or, in the event of a catastrophic failure, we could take over and manually play out the match. We hotly debated the ethical dilemma of whether we would tell the audience what had happened if it ever came to that.
The clock struck 7:15, in came the audience, and on came the red blinking lights of the video cameras. The lecture hall was packed, and the professor did a great job making jokes as we introduced the event and briefly explained the rules. Our class professor could have died of happiness at that moment. As we began the competition, the professor took white and made the first move. “Take THAT you cold calculating MACHINE!” he proclaimed to resounding applause from the audience. That pissed me off. I had been championing a feature where the robot would insult and taunt its opponent with every move, but I was shouted down for not being classy enough. We had to endure a night of insults that could have been combated, and in hindsight we all admitted we should have fought fire with fire. At first, everything looked like it was going well. The robot was working, it could see what was happening, and the chess program was making good decisions. We started to relax a little, and the audience was clearly intrigued as the gantry crane would slowly move to the right spot, lower its arm, then grab the piece and move it.
Then disaster struck. Just a few moves in, our opponent played en passant, which, while a rather esoteric move, also has the interesting distinction, as Wikipedia puts it, of being “the only occasion in chess in which a piece captures but does not move to the square of the captured piece.” Uh oh. Our vision system was designed to look for what changed, and it was looking for either a move or a capture, not a weird combination of both. Our program chose the two best coordinates to send, which gave the chess program the wrong board information, and we watched in horror as the gantry crane moved into the wrong position, knocking over several pieces as it swiped vainly at the air to make its move. “Uh oh, Houston, LOOKS LIKE WE HAVE A PROBLEM!” yelled our opponent as the crowd roared its approval, and he immediately launched into a discussion about how pure logic will never overcome the malleability of the human mind. It was chaos as we raced to reset the board, recalibrate the vision system, and inform the chess program of the true state of the game. I believe we had to reset the game state completely, in essence telling the computer that the game had just started with the board situated the way it was, since it didn’t understand en passant. Things were looking up as the computer managed to make a move and actually captured a pawn to the crowd’s approval.
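To make the failure concrete, here’s a hypothetical illustration (again a modern Python sketch, not our code): en passant changes three squares of the board, the capturing pawn’s origin, its destination, and the square of the captured pawn. A differ that reports only the two biggest changes is therefore guaranteed to drop one of them.

```python
def changed_squares(before, after):
    """Every (rank, file) square whose contents differ between two snapshots."""
    return [(r, f) for r in range(8) for f in range(8)
            if before[r][f] != after[r][f]]

EMPTY, WHITE_PAWN, BLACK_PAWN = ".", "P", "p"
before = [[EMPTY] * 8 for _ in range(8)]
before[4][4] = WHITE_PAWN   # white pawn on e5
before[4][3] = BLACK_PAWN   # black pawn that just played d7-d5

# White captures en passant: the e5 pawn lands on d6, while the d5 pawn is
# removed from a *different* square than the one the capturer moved to.
after = [row[:] for row in before]
after[4][4] = EMPTY
after[5][3] = WHITE_PAWN
after[4][3] = EMPTY

print(changed_squares(before, after))  # -> [(4, 3), (4, 4), (5, 3)]
```

Three changed squares where the system expected two: whichever pair got sent, the chess program’s board no longer matched reality.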
For the next fifteen or so moves, we each took turns giving a short talk about how the robot was built, the technologies we used, and the challenges we overcame. As the evening wore on, the game developed nicely. I’m not a chess player, but I could tell that the computer was holding its own; it could already beat everyone in the class, so we were optimistic about our chances. Still, we were roughly an hour into the game, and even though I wasn’t following every move closely, it was a surprise when all of a sudden the professor announced “Checkmate!” I looked over at the board, and it was indeed checkmate! The computer, however, didn’t agree. It turns out that either during the initial setup of the board or during the insane scramble to fix our en passant adventure, someone had incorrectly placed the king and queen on the computer’s side. The computer had been defending its queen instead of its king, and thus had fallen prematurely. A disappointing end to what would become one of the most memorable projects of my college career. Still, we couldn’t help but feel proud of having built something unique and staged a public spectacle of sorts that everyone seemed to enjoy. Tomorrow, as I tune in to watch Watson and the team from IBM, I have a small sense of how they’ll be feeling, and win or lose (there are already rumors leaking out as to the results), it is an incredible achievement. Congratulations and good luck!
Just spent a very informative and interesting day at 10gen’s Mongo Atlanta conference (#MongoATL hashtag). For a one-day conference, the event seemed to be very well planned and executed, and I feel like the event (and its sister events) is a great one to attend if you’re using or planning on using MongoDB.

Held in the extremely nice Georgia Tech Research Institute Conference Center, the event consisted of several speakers talking about how they’re using MongoDB, or how to tune and administer the database. These were fairly technical talks, as most had code up during the presentations or were running commands from an interactive shell. The audience participated eagerly during most talks, and the questions at the end of the talks were all pretty good.

Closing out with a couple of random thoughts:
- It took a while for the wifi to work, but they got it straightened out after about an hour.
- There was never enough coffee on hand.
- The swag was really nice: T-shirt, mug, and stickers. All of it was branded Mongo and not 10gen, which I felt was a classy touch.
- Most presenters tweeted URLs to their talks within minutes after they gave their presentations.
- It was cool to see 10gen’s CEO up there giving a lecture on how to administer Mongo and how sharding works.
- Every single presenter used Keynote. This might have been the single most surprising thing to me.
- It was really refreshing to be at a very technical conference. Unlike most of the healthcare conferences I go to, this one spent quite a bit of time showing code examples, answering very technical questions, etc.
All in all a great experience.