Like every software organization, we have trouble teaching our developers how to think like users. It’s a time-consuming process, and when you’re in an industry like healthcare, it can often be extremely difficult to visualize exactly how a product is going to be used or how a process might affect a user. You’ve probably never been to med school or worked in a pharmacy; these things are utterly foreign. However, visualizing a user’s workflow and responsibilities is often the difference between a mediocre product and a great one.

One method that I’ve found to be illuminating to our engineers is to restate every message, function, or process in terms of a bank, with them playing the user. Instead of the error message being “Purchase Order Alert: We’re sorry, but there were one or more items missing from your recent purchase, click here for more details”, rephrase it as “ATM Deposit Alert: We’re sorry, but there were one or more checks missing from your recent deposit, click here for more details.”

Makes a difference, doesn’t it? Now our verbose, overly-polite alert text seems almost ridiculous. Tell me what happened with my checks! Don’t make me click through for more detail, especially if I’m on a mobile device! That’s my money and it’s important!

“ATM: 1 check for $100.00 was returned.”

That’s much better. I feel notified and in control as a user, and it was short and sweet. Very cellphone friendly. Translated back to our domain:

“Order 123: 1 item shorted: Tylenol 20MG”

The point is that users actually use the products we build. Things like error messages and process flows are important. As developers, we often think of these things as control points or logic trees, and don’t stop to relate what’s going on from a user’s standpoint to any of the important systems or applications that we ourselves use.
The bank analogy should help you get in the habit of stepping aside and thinking outside your code for a bit, because it pulls the process or message into the realm of your own experience.

Even the process of constructing the analogy can be very instructive. If you can’t quickly pull together an analogy that describes what you’re doing in terms of your own life, you probably have no clue what you’re doing. In other words, you’re probably doing it wrong.

Next time you’re solving a problem, try restating it using concepts you’re familiar with from your own life. You might be surprised at how useful this technique can be.

Updated: Sunday, 12/5/2010

Today’s New York Times Magazine had an article on Jamie Dimon (I think it requires registration) that talked about his management and conversational style. One of his favorite things to do is equate banking principles back to ordinary life. Seems like we’ve got company on the other side of the fence too!
A lot has changed over the last few years. It seems like forever ago, and yet it was only in 2005 that AJAX sprang forth and ushered in the buzzword of Web 2.0. And it’s great – rich applications that are delivered quickly and efficiently allow me to do things online that I never thought possible. And yet, there’s a dark side to the Web 2.0 craze for APIs and tools and importing and exporting data: we’ve taught our users to embrace man-in-the-middle attacks. Every time I see a website asking me for my Facebook password I cringe, but that pales in comparison to the nightmare that is Mint.com.
I love Mint.com. They have spectacular visual design, a great product, an entertaining and informative blog, and a great iPhone app. I know tons of people who love Mint.com, and yet, when surveying my digital life with a critical eye, I know of no greater security risk than Mint.com. It’s still astounding to me that Mint could grow from a small startup to being acquired by Intuit in the space of a few years and essentially retain unlimited liability by storing users’ logins and passwords to their entire financial lives. Yikes.
If I were turned to the dark side, I would immediately attempt to hit Mint for their millions of users’ credentials, which provide completely unfettered access to their accounts, most of which are not FDIC insured. This means that when someone hacks Mint, they’ll be able to pull out all of my money, transfer it, etc., and I’ll be responsible, because from the financial institution’s perspective they aren’t liable when I entrust my credentials to a third party. Sure, Mint encrypts their password database, but somewhere that password is known or stored. It has to be, because they have to replay my unencrypted credentials to log in. Sure, there are a bunch of ways they could monitor this access and mitigate risk, but at the end of the day there are usernames and passwords sitting there for the taking.
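To see why this can’t be fixed with better password storage alone, consider the difference between verifying a password and replaying one. Here’s a minimal Python sketch of the verification side (the password and iteration count are purely illustrative):

```python
import hashlib
import os

# A site that only VERIFIES a password can store a salted one-way hash:
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac('sha256', b'hunter2', salt, 100_000)

def verify(attempt: bytes) -> bool:
    # Recompute the hash and compare; the plaintext is never stored.
    return hashlib.pbkdf2_hmac('sha256', attempt, salt, 100_000) == stored_hash

print(verify(b'hunter2'))  # True
print(verify(b'wrong'))    # False
```

An aggregator like Mint can’t take this approach: it has to present the plaintext to the bank’s login form every time it fetches data, so the credential must be kept recoverable somewhere in its systems.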
There is simply no technical reason the financial institutions out there can’t work with Mint and every other API provider/consumer to implement an OAuth authentication solution. For the nontechnical among us, an OAuth solution is essentially a token-based method of authentication. A key-based authentication mechanism doesn’t necessitate handing over your username and password to a third party; instead, you grant a key (and, depending on the API, limited access) to Mint, which can then log in and grab the information it needs. If Mint gets compromised, your financial details might be stolen, but at least the attackers can’t access the upstream account with the same level of access. In fact, this is what I was really hoping would come out of the Intuit acquisition: with Quicken, you used to have your financial institution give you a separate login or key for Quicken specifically. To be clear: this was originally the financial institutions’ problem. They should be providing OAuth-based services for Mint and others to consume. However, it has now become Mint’s problem to address as well. Also, hindsight is 20/20. What may have started out as a great application for a developer to track his personal finances with an acceptable risk quotient has ballooned into one of the largest and best avenues for tracking finances in the world.
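To make the contrast concrete, here’s a toy sketch of token-based delegation – the class and method names are hypothetical, not any real bank’s API. The point is that the bank issues a revocable opaque token, the password never leaves the user, and the user can cut the third party off at any time:

```python
import secrets

class Bank:
    def __init__(self):
        self.accounts = {'alice': 1000}
        self.tokens = {}  # token -> account it was granted for

    def grant_token(self, account: str) -> str:
        # The user authorizes the app on the bank's own site; the bank
        # hands the app an opaque token, never the user's password.
        token = secrets.token_hex(16)
        self.tokens[token] = account
        return token

    def revoke(self, token: str) -> None:
        self.tokens.pop(token, None)

    def balance(self, token: str) -> int:
        account = self.tokens[token]  # raises KeyError if revoked/invalid
        return self.accounts[account]

bank = Bank()
token = bank.grant_token('alice')  # user approves the aggregator once
print(bank.balance(token))         # the aggregator reads data with the token
bank.revoke(token)                 # user cuts the aggregator off at any time
```

If the aggregator is breached, the stolen tokens can be revoked wholesale; the passwords were never there to steal.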
The simple fact is that today, when you change your financial institution credentials, Mint breaks – which means they’re scraping the content from financial institutions. Financial institutions are in on it too: it should be easy to see that a large percentage of their traffic is coming from one domain. Even sites that use a “security questions” approach are accessed via Mint, which simply saves the answers to all possible security questions! Financial institutions could easily block Mint by adding CAPTCHAs to their login flows, but since I personally know several people who have changed banks in order to use Mint, my guess is there’s sufficient pressure to maintain Mint’s access.

Some might say that I’m being overly paranoid because we’re used to saving usernames and passwords on our local machines. It’s true that, in a direct comparison, Mint probably has the security edge over my MacBook Pro, but from a risk management perspective it’s quite a different story: all of a sudden it pays to hire a team of evil programmers for a million bucks to gain access to Mint’s millions of users. Consider too that most people re-use a single username and password as much as possible, which means cracking a lower-security database (a web forum, etc.) can leapfrog an attacker into those same users’ Mint accounts. The less we use usernames and passwords for services, the better.

What’s the solution? I think a three-pronged approach should be considered by any modern technical service that holds data of value:
- Institutions should provide rich APIs in the first place and aggressively prevent screen scraping.
- APIs should clearly segregate “read only” and “read and write” access levels. Mint.com can “read” my financial data but can’t “write” and pull money out of my account, for example. API access could be further segmented to expose only certain pieces of data (e.g. balances only and not transactions, or both, etc.)
- APIs should not use account credentials for access; they should be key or token based instead.
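The second prong – scoped access levels – can be sketched in a few lines. This is a toy illustration with made-up names, not any real institution’s API: each token carries a scope, and the API refuses mutating operations to read-only tokens.

```python
import secrets

class ScopedAPI:
    def __init__(self, balance: int):
        self._balance = balance
        self._scopes = {}  # token -> 'read' or 'read_write'

    def issue(self, scope: str) -> str:
        if scope not in ('read', 'read_write'):
            raise ValueError('unknown scope')
        token = secrets.token_hex(16)
        self._scopes[token] = scope
        return token

    def read_balance(self, token: str) -> int:
        self._scopes[token]  # any valid token may read; KeyError otherwise
        return self._balance

    def withdraw(self, token: str, amount: int) -> int:
        # Only a read_write token may move money.
        if self._scopes.get(token) != 'read_write':
            raise PermissionError('token is read-only')
        self._balance -= amount
        return self._balance

api = ScopedAPI(balance=500)
mint_token = api.issue('read')       # an aggregator gets read-only access
print(api.read_balance(mint_token))  # 500
# api.withdraw(mint_token, 100)      # would raise PermissionError
```

With this split in place, a compromised aggregator token leaks information but can never initiate a transfer.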
This might sound complicated, but in practice it’s very straightforward: I simply log in to authorize a request made by an application (anyone authorized a Netflix device recently?) and that’s it. In an increasingly networked world, application service providers bear increased responsibility for providing safe computing to users. The old standard of storing usernames and passwords within applications needs to change to reflect a different risk model, on the part of both providers of data (financial institutions) and consumers of it (Mint.com). I want to use Mint and recommend it to others, so I’m hoping they can bring their clout to bear and work things out with the financial institutions to solve this problem.
At Sentry Data Systems, we have a very distributed technology organization. The majority of our technical staff does not work from our Deerfield Beach headquarters. Instead, we have our developers, implementation staff, tech support, and infrastructure personnel spread out across the country, and even a satellite office located in the Midwest. Everyone is an employee, and we don’t do any offshoring, but we are most certainly not geographically close to each other.

If you’d asked me five years ago whether I thought this would be a good approach to take, I would have rather emphatically told you no. In fact, I resisted it pretty strenuously for quite a while. Working remote was a reserved privilege: you had to be a senior developer who had spent significant time on site (at least a year). While we had folks dialing in right from the beginning, remote workers weren’t the majority, so Bad Things couldn’t happen. And yet, in hindsight, it may be one of the factors that helps us squeeze more productivity out of our staff, helps them produce higher quality code, and gives us a leg up on the competition.

For starters, it forced us extremely early on to invest in systems, processes, and a way of working that brought everything we did online. Project management, change control, bug tracking, issue tracking, source control, testing, collaboration, documentation, document management, communication – all of these things needed to be ubiquitous and consistently used by the entire staff. If something wasn’t accessible online, that meant Bob in Utah wasn’t going to be able to contribute, learn, participate, or even know about it.

The second major factor is that a distributed team gives us a national recruiting footprint.
We’re not just going up against Acme Software in our back yard down here in Fort Lauderdale (South Florida has its own disadvantages for hiring technology workers); we get to compete for top talent in every job market across the US. Our pool of potential applicants increases by an order of magnitude or more, which really amps up the talent level and allows us to be super picky.

Third, I recently came across an article discussing some research from Microsoft that explored traditional myths about software development, and it touched on the fact that, in their experience, distributed teams don’t have a negative impact on team performance. The researchers rightly point out that this flies in the face of “one of the most cherished beliefs of software development,” but they also illustrate how any worker would much rather talk to someone knowledgeable on their team 4,000 miles away than a less knowledgeable guy next door. Makes sense, and it jibes with our experience as well, but I can’t say I expected this outcome at first.

Are there drawbacks? Sure. It’s nice to have everyone over for a barbecue on a long weekend, and that can’t happen. It’s fun to walk by and joke with everyone while making the rounds in the morning, and that’s harder to do, but we still manage to interact a good deal as a team. The flip side is that it’s nice for the remote guys to be able to live where they want, stay in touch with family and friends, and yet still have a great job at a fun company. This really contributes to retention – we’ve had several guys move several times in the last few years, and I count each one as a “save” on losing an employee.

If you’re considering running your organization’s software teams in a distributed fashion, here are some things you’ll want to make sure you’ve got covered:
- Excellent communication methods: cell phones, VoIP phones for extension dialing off the corporate network, private instant messaging network, email, and more.
- Organizational Discipline: People in the organization need to understand that they will often be interacting with remote individuals, and that they can’t cherry pick projects to those who are in the office. Yes, a phone call is not as nice as face-to-face, but often it’s more productive.
- Team-Based Activities are Still Key: This is an easy one for us. We play video games every Friday afternoon/evening, a combination of shooters (Team Fortress 2) and other games (DotA and HoN), and the games are part of the employee start-up paperwork.
- Everything Must be Online: Bug tracking, brainstorming, documentation, everything. A major advantage this gives you is it’s a head start on preparing for audits or other certifications (SAS70, etc.) you might need to complete as an organization as everything will be easily accessible.
- You Still Need to Be Involved: If you like to walk around and say hi to everyone each day like I do in the office, you still need to do it “online” via instant messenger or phone call.
- Figure out if a Satellite Office Makes Sense: We found that we had roughly 5 people clustered in one city, so we sprung for a satellite office. It’s a cheap thing to do and helps our recruiting in that area.
- One Timezone: We work on US Eastern time. You can live where you want, but you’re going to work that timezone. This is critical, in my opinion: while it does mean the guys in California are up at 5 AM, it’s not the end of the world, and it really keeps things simple from a scheduling and planning perspective and maintains the ability for quick communication.
It probably isn’t for every organization, but it’s really worked out for us, and it’s definitely something we’ve grown organically and will continue to improve.
I spent a good portion of time on this over the weekend, and it turned out to be frustrating enough, and the available answers incomplete enough, that I want to document it here briefly.

I wanted to be able to populate my Django models from an external script by simply calling MyObject.save(). My particular case was scraping some information off of a website using BeautifulSoup and then inserting it into my Django application, where it would show up and be editable from the admin section. It turned out that I had a major problem with environment variables, so after a lot of reading and some trial and error, the following should work for you.
```python
import os
import sys

sys.path.append('/path/to/the/directory/above/your/project')
os.environ['DJANGO_SETTINGS_MODULE'] = 'yourproject.settings'

from yourproject.yourapp.models import YourModel
```
To be a little more clear: if your project is in “/Users/username/myproject”, append “/Users/username” to your system path. This clears up your environment so that when you set the location of your Django settings file using dot notation, Python knows where to look.

As always, if you have any questions or comments or a better way to do this, let me know. I’m using this on Snow Leopard with a trunk SVN checkout of Django.
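The path rule can be stated mechanically: append the parent of the project directory, not the project directory itself. A quick illustration using os.path (the path here is just an example):

```python
import os
import sys

project_dir = '/Users/username/myproject'

# Append the PARENT directory, so Python can find the 'myproject' package.
parent = os.path.dirname(project_dir)
sys.path.append(parent)

# Now 'yourproject.settings'-style dotted imports resolve, because Python
# searches '/Users/username' and finds the 'myproject' package inside it.
print(parent)  # /Users/username
```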