Monday, December 26, 2011

The GitHub Job Interview

One interview is never enough
to find if the candidate's up to snuff
takes two or three more
job interviews before
you can tell if the CV's a bluff
How the hell do you interview a start-up CTO? Finding software developers is hard enough, but a start-up founder can't take any chances when it comes to hiring developer #1. So, what, five interviews? How much can you talk to someone before you know what it's like to work with them?

It's apparent to anyone who's done this that the marginal value of yet another interview approaches nada after two or three sessions. Some people are just good at being interviewed and can charm the quark off a hadron, but when you start working together all kinds of stuff rises to the surface. None of this is news to experienced hiring managers. Everybody knows that working together with someone is the ultimate job interview. If only you could try people out before hiring...

Bad Solution

Some hiring managers try to do that. Sort of. They hand out programming exercises and expect candidates to not mind wasting their time developing nonsense code just for the sake of the interview. This sucks. I never do that, and I'd never play along if someone asked me to. It's plain unfair. Worse, however, are those who actually offload real work onto candidates. Yeah, that happens. People are expected to work for free before being hired.

You might say, just hire. Let someone work "on probation" for a short period and if it works out, great. If not... well, it doesn't really work out that great for them. Being rejected after two weeks of work doesn't look good on anyone's CV. Moreover, the time you have to spend with new employees getting them up to speed with your code-base, development guidelines, the product and so on is just too much of an investment. And in the end you're likely facing a disgruntled ex-employee with access to your source-control system. Yikes.

Awesome Solution

That's why I'm advocating the GitHub job interview. Open-source projects are a fantastic way to collaborate with people you don't know too well. And GitHub in particular, with its ease of forking and pull-requests, is just the best (and biggest) platform for open-source collaboration.

Here's what you do. You come up with a cool idea for an open-source project. This becomes your company's development sandbox. Candidates are then asked to contribute to the project in some way. You want to see them code? Ask them to develop a module. You want to see them tackle a bug? Ask them to choose one from the bug-list. This works for every aspect of development work. You can design features together. You can gauge their communication skills. You can see how well they handle reviews. You can ask them to document their work and see how well they can write. But above all, you're not taking advantage of anyone, and true developers probably won't mind investing time in an open-source effort.

Choose your GitHub project wisely. It should be something relatively fun. It ought to use the same technology stack your company uses. And it should be relatively simple to grasp, because the point is not to be investing too much time training people you're not yet hiring.

Not all developers have existing online projects they can point to when you're interviewing them.  So make one for them. You avoid hiring and firing. You reduce risk by investing less time and not exposing your IP to candidates.

This is what Gigantt will be doing, now that we're hiring our first employees. Stay tuned for details on our open-source project.

Monday, November 14, 2011

A Self-Referential Demo Video: How To Plan Projects with Gigantt

We've been getting lots of helpful feedback from our beta users, and one question came up a few times: how do we plan a project?

That's actually a great question. While using the application may be very easy, it's not often clear, when faced with a blank slate, how to start planning a project. 

So we made this short instructional video. It's three minutes long, but it tries to cover a lot: not only how to get started, but also best practices and techniques we've been learning by using Gigantt ourselves.

The video is doubly awesome because it's self-referential. You'll have to see what that means yourselves...

Don't forget our ever-growing help wiki.

Wednesday, November 9, 2011

Waze's Next Killer Feature

This post is completely off-topic. It's not about project planning. It's about the best damn GPS navigation software out there today - Waze.

Waze is awesome because it uses information gathered from all its users' smartphones to learn where traffic is jammed and route drivers around the jams along the quickest route.

But there's one thing I always miss when using Waze - it doesn't tell me where I can park. Sure, adding a parking lot layer over the map is sort of a solution. Many competing products do this. But in some places [cough]Tel Aviv![/cough] this is simply not enough because there are waaay too few parking lots.

So here's what Waze's next killer feature could be: crowd-sourced parking. As a user I would definitely pay money to know where the closest publicly available parking spot is. Waze could set up a whole economy of parking spots: when you leave a spot you "check out" with Waze. If you're looking for a parking spot you hit the P button and Waze directs you to the nearest one. Users could accumulate parking credits when they check out and another Waze user takes their place.

I hope somebody there reads this.
Anywayz, Gigantt ♥ Waze.

Monday, October 31, 2011

Cheat Sheet

All you really need to know to get started with Gigantt is ENTER for a new task and INSERT for a new child task. But power users quickly become thirsty for more keyboard shortcuts, and Gigantt has one for every possible action.
Gigantt's knowledgebase now contains a keyboard shortcut cheat sheet you can download and print.

Friday, October 21, 2011

Gigantt Presenting at TechAviv

Gigantt will be on stage Wednesday at the next TechAviv Founders Club conference.
Come watch and mingle - places are limited, so better register now.

Friday, October 14, 2011


Gigantt's Help web site is now up. It's in the form of a wiki. A true wiki, meaning anyone can edit it.

Right now it contains mostly articles explaining the existing features, keyboard shortcuts, FAQs, that kind of stuff. But I do welcome contributions from users.

Saturday, October 8, 2011

Team Edition

Today, after way too many weeks of testing and improvements, Gigantt's Team Edition is live. You can finally assign tasks to people (or other resources) and enjoy Gigantt's smart goal-based task scheduling.

Here's what the new version looks like: 

A few new things to notice. All of these tasks are assigned to Cosmo Kramer, as evidenced by the watermark in the bottom-left corner. It tells us that Kramer is in charge of this part of the plan, and that any new task created here will be assigned to him by default.

If we decide to add another task here and assign it to George Costanza, we'll see his name on the left of the task:

All the other tasks belong to Kramer - the default owner of tasks in this area - so there's no point in showing his name beside each and every task. That's why only George's name is shown.

To reassign a task just click on the left side-panel and you'll see this popup where you can select any team member:

You can also reach it with the keyboard - Tab will reach the button and Enter will open the panel.

The start & finish dates of each task are shown in the time-bar above:

There's now a wider range of estimates that you can choose for each task, including even a one-minute estimate (useful for check-lists). The estimate panel has an array of buttons so that estimating a task is just one click for ultra-fastness:

Important note: as part of the beta-ness of this version there's still no UI to add/remove members from your team. That's in the next version. No worries, though, just write to and we'll do it for you.

Friday, August 12, 2011

Prioritizing Goals in Gigantt

Should we invest time in infrastructure or focus on shipping the next big feature? Should the QA team work on "Product A" or "Product B"?

Managing a complex work plan that spans multiple projects and teams means always being able to decide what's more important. And there's always a decision to be made. Telling your development team to focus 35% of their time on one project and the rest on another is a sure recipe for neither project finishing on time. Keep it simple: break the plan into smaller chunks and decide what's more important.

This is how Gigantt tackles prioritization. Instead of maintaining complex resource allocation schemes, users of Gigantt define goals and prioritize them. A goal in Gigantt is just any task in the work plan. When you mark a task as a goal, Gigantt analyzes the entire work plan and considers all the tasks that have to be completed for the goal to be reached. For you nerds, it's like a reverse DFS of the work-plan graph. Okay, enough nerd talk, let's see an example.

Kramerica Industries has two products: Widget and Cajiggre, and there's a work-plan for a few weeks ahead.

Jerry is in charge of Widget and Kramer is in charge of Cajiggre, and as long as that's the case there's not much to prioritize. Work happens independently and in parallel. Things become more interesting when there are shared resources between products. So let's throw George into the mix as QA Manager. George has to test each version of each product before it can be shipped, and there's only one George... Let's see how each product's plan looks now.

Jerry's plan looks OK. He does the development and George does the testing (and shipping) right after each version is finished:

But Kramer's plan has a few delays in it. For example, George is busy testing Widget 1.0 when he could be testing Cajiggre 2.0:

At this point a decision has to be made: which product is more important? Since Widget appears before Cajiggre in the work plan, Gigantt will automatically give it preference when assigning resources. Whatever appears on top is considered higher priority. This is true for simple tasks as well as for whole projects, and it's the simplest way to prioritize in Gigantt. But what if Cajiggre 2.0 is actually a top priority for the company? We need finer-grained control over priorities, not just between products but also between individual versions of each one. We achieve this by marking all of the ship tasks as goals.

Just toggle the little star icon for every task that should be considered a goal:

If we then click on (prioritize) we see an ordered table of all the goals in our work plan:

If we drag "Ship 2.0" to the top, as the video below shows, all the tasks leading up to "Ship 2.0" get rescheduled so that "Ship 2.0" completes as early as possible.

The same thing can be done with individual tasks, not just entire versions of the product. Since a goal is defined as everything that must happen in order for a given task to finish, it's entirely possible to define multiple goals in the same project. Notice that goals can also overlap (share tasks).
The first task is shared by both goals and thus has top priority (red).
Whenever there's contention over a shared resource, managers can easily define any task as a goal and prioritize it to settle the dispute. They can also play around with priorities to see how much it affects the finish dates of each goal. Instead of allocating resources in advance to various teams and then fighting over who "owns" the lab this week, only goals are prioritized and everything else is derived from that. Prioritizing goals is less confrontational than prioritizing teams or entire products.
What I like best about Gigantt's goals is how clearly the organization's priorities are conveyed to the teams. It's hard to get confused by that table above. Everybody knows where their efforts should be focused.
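For the nerds who stuck around: the reverse-DFS idea from above is simple enough to sketch in a few lines of Python. The task names and the `deps` structure here are invented for illustration, not Gigantt's actual data model:

```python
# A toy sketch of goal analysis via reverse DFS. `deps` maps each
# task to the tasks it depends on; the tasks required for a goal are
# everything reachable by walking dependencies backwards from it.

def tasks_for_goal(goal, deps):
    """Return the set of tasks that must finish for `goal` to finish."""
    required, stack = set(), [goal]
    while stack:
        task = stack.pop()
        if task in required:
            continue
        required.add(task)
        stack.extend(deps.get(task, ()))
    return required

deps = {
    "ship 2.0": ["test 2.0"],
    "test 2.0": ["build feature", "set up lab"],
    "build feature": [],
    "set up lab": [],
}
print(sorted(tasks_for_goal("ship 2.0", deps)))
# → ['build feature', 'set up lab', 'ship 2.0', 'test 2.0']
```

Once each goal maps to a set of required tasks, prioritizing goals is just a matter of scheduling those sets in order.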

Sunday, July 24, 2011

A Duck!

I believe it was the comedy genius John Cleese who once said that the funniest animal is the duck.

Today I took a day off from working on resource scheduling algorithms and decided to write a fun little tool that I've been needing. Ever needed to copy-paste some text from your PC to your iPhone? Sending yourself emails is so lame. I wanted something fast, with no registration required and nothing to install on my mobile phone.

I give you Copy Duck:

You just paste stuff, take a snapshot of the QR-code, and then you see the same text on your mobile. Tested it on Chrome/iPhone but it should work on any modern smartphone (if you have a QR scanner app).

For the geeks: built with Google App Engine using Python, jQuery and Google Chart API.
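For the extra-geeky, the core trick is tiny. Here's a hedged sketch of how such a QR link could be built with the Google Chart API's QR chart type (the Chart API has since been deprecated; treat this as a historical illustration, not Copy Duck's actual source):

```python
from urllib.parse import urlencode

# Turn pasted text into a QR-code image URL via the old Google Chart
# API, then scan it with a phone. Parameter names follow that API.

def qr_url(text, size=300):
    params = urlencode({"cht": "qr",              # chart type: QR code
                        "chs": f"{size}x{size}",  # image dimensions
                        "chl": text})             # the payload to encode
    return "https://chart.googleapis.com/chart?" + params

print(qr_url("hello from my PC"))
```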

Thursday, June 23, 2011

Keyboard vs. Mouse

When it comes to speed nothing beats keyboard shortcuts. First time users of Gigantt often start out by using the mouse quite a lot: clicking on buttons, zooming in and out with the mouse wheel. But once they learn the keyboard shortcut alternatives they usually adopt them and don't look back. Ten fingers are faster than two fingers.

Keyboard shortcuts also have the advantage of not cluttering up the screen with endless buttons and menus. But there's a UX price to pay - a steeper learning curve. For everything but the most obvious and intuitive keyboard shortcuts, like ESC (cancel) and Ctrl+C (copy), there's no getting around the fact that users probably won't be able to guess which key-combo corresponds to which feature.

We still try our best to make sure keyboard shortcuts are as intuitive as possible. For example:

D - Mark a task as done
I - Implode a few tasks into one complex task
E - Explode a complex task into several tasks (the opposite of implode)
L - Create a link between tasks
DEL - deletes anything selected
INSERT - inserts a new task.

See? It all makes sense...

Until now nearly all mouse operations had keyboard alternatives. I say nearly because there's one thing you still couldn't do: select and manipulate arrows between tasks. Now you can. 

The Up & Down keys still navigate between tasks according to their vertical order. If you want to select the arrow into the current task, just hit Left. You can then delete the arrow with DEL, insert an intermediate task with INSERT, and so on. If there's more than one arrow, you can move between them with Up/Down as well.

These new shortcuts solve a painful UX problem: switching input modes. No matter which you prefer, mouse or keyboard, having to constantly switch between them is a real speed bump. So rejoice, fellow typists! From this day on, no feature is beyond your keyboard's reach.

Saturday, June 18, 2011

Collaboration in Gigantt

Today we're happy to release the first collaborative version of Gigantt.

What's changed?


Multiple users can now edit the same plan at the same time. It's also cool for the same user to edit the plan in multiple browser windows. However, Gigantt still doesn't support managing multiple resources (i.e. a team). That's our next major feature.
So what happens if two people make conflicting changes at the exact same time? Gigantt helps you avoid this problem by featuring a "freeze period". Basically, any task you modify and its immediate vicinity get frozen for 60 seconds. During this period only you can edit these tasks. Others have to wait for them to "thaw". Frozen items have a snowflake icon. If you hover over one you'll see who has been editing it and how long until it thaws out.
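For the curious, here's a minimal sketch of how a freeze period like this could work. The class and method names are hypothetical; a real implementation would live on the server and also freeze the modified task's immediate vicinity:

```python
import time

# Hypothetical freeze-period bookkeeping: the first editor of a task
# holds it for 60 seconds; other users must wait for it to thaw.

FREEZE_SECONDS = 60

class FreezeMap:
    def __init__(self, now=time.time):
        self.now = now            # injectable clock, handy for testing
        self._frozen = {}         # task_id -> (user, thaw_time)

    def try_edit(self, task_id, user):
        """Freeze task_id for `user`, unless another user holds it."""
        held = self._frozen.get(task_id)
        if held and held[1] > self.now() and held[0] != user:
            return False          # still frozen for someone else
        self._frozen[task_id] = (user, self.now() + FREEZE_SECONDS)
        return True

fm = FreezeMap()
assert fm.try_edit("task-1", "alice")      # alice freezes the task
assert not fm.try_edit("task-1", "bob")    # bob has to wait for the thaw
```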

We think this approach to collaboration is better than simply letting people step on each other's toes and then asking them to resolve their conflicting changes.

Visual Clipboard

We got quite a lot of feedback that copy, cut & paste wasn't so intuitive. Now it works just like in Windows Explorer, but you also get a visual clipboard window that shows you the contents of the clipboard.

The clipboard is also used for creating connections between remote items. To create a new link you select the source task and hit "L" (or the new link button). The task will be added to the clipboard in _linking_ mode. You then select the target task and paste. That's it.


We won't bore you with all the details, but Gigantt is now wicked fast. Navigation is much quicker, even for very complex tasks, because sub-tasks are now rendered in the background (they sort of fade in). Auto-save is also much faster and so is every editing operation.

There are tons more improvements and fixes. But I'll leave those for you to find.

Keep sending us your feedback. It's really valuable to us and we do act on it.

Sunday, June 5, 2011

T-Shirt Sized Estimates

How long would it take you to write a Hello World program in a language you've never used? Minutes, probably. But how many minutes? Three? Nine? Asking for such precise estimates just doesn't feel right, does it? But if you pose the question like this: "Would it take you less than 15 minutes?" it suddenly seems like an entirely reasonable question.

This, in a nutshell, is why T-shirt sized estimates work. There are other reasons, as well, but before we dive into them let's review what T-shirt sized estimates (TSE) are.

Agile development methodology has made the concept of TSE rather popular in recent years. The idea is to give rough estimates to tasks in the form of T-shirt sizes. Small, medium, large, extra-large and so on; you get the point. But what does "small" mean, really? This is pretty open to interpretation and each team may define these sizes in its own terms. Some associate a duration range with each size. e.g. S is less than one hour. M is between one and three hours. Some associate T-shirt sizes with points that are later added to an aggregate estimate. But in essence the idea is simple: instead of arguing over how many minutes or hours a task is going to take, let's just agree that it's small and move on. 

Now there's a hidden aspect to this method. The sizes are limited and have an upper bound. No T-shirt is XXXXXXXXL. This is significant. People are terrible estimators of large efforts. We're less terrible with reasonably sized efforts. So using TSE doesn't mean we don't invest any real effort into estimating and just spit-ball "2 years!". It only means we're not fooling ourselves into thinking that giving a very specific estimate would improve our project's total estimate in any way.

In fact, overly specific estimates are detrimental to delivering on time. Estimation is a well-researched area in psychology. People aren't just bad at estimating, they're also terrible at knowing how bad their estimates really are. That's why the vast majority of people consider their own intelligence or attractiveness to be above average (which is plainly impossible). Ask people to give a 90%-certainty estimate and they'll consistently give an answer that's too precise. A well-known experiment has been widely replicated in many classrooms. Students are asked when Queen Victoria was born, and they need to give their answer in the form of a range of years (e.g. 1800-1900) that they consider 90% certain to contain the truth. The goal of this experiment is to see how well people estimate their own ignorance/knowledge. Statistically, you would expect students to specify ranges so wide that only 10% of them would miss the correct year. But instead a much larger portion of the audience gives a wrong answer by selecting too narrow a range. We're overconfident in our knowledge and underestimate our ignorance.

When you think about it, effort estimates are really a form of self-estimation (uh-oh!). People are asked how long it would take them to finish a task. That's why most estimates are overly optimistic. It's tied to a host of psychological factors: overestimating our own capabilities and skills, wanting to please our superiors (and ourselves) with a smaller estimate, ignoring potential unexpected surprises along the way, etc.

Here's how TSE solves these problems.

First, if you force people to round their answers they're more likely to factor some uncertainty into it. If you internally estimate a task at 40 minutes, but are then asked to choose between a 30 or 60 minute estimate, you'll suddenly remind yourself that 40 minutes doesn't really take into account various overheads or possible interruptions. And it's much easier to justify an over-estimation to yourself when it's someone else who's forcing you to round it up. Rounding up on a small scale is a good thing.

Second, by limiting yourself to small estimates you avoid those big vague predictions which are the root of all evil. If there's no T-shirt size for 3 months, then you have no choice but to actually break your estimate down into smaller chunks. This, in turn, forces you to invest just a bit more time in planning ahead than you might have otherwise. Suddenly you're reminded of holidays, and hiring efforts, and integration, and setup time... This, of course, is tied to the first rule of project underestimation: it's not the estimates that are wrong, it's the plan that's incomplete.
If you allow yourself overly specific large estimates you also introduce a mental block: you've checked the box. It's "done". You wrote down "4 months" as your estimate for that distant milestone and now you don't have to look at it again until you reach that task. How convenient. If, on the other hand, you only allow yourself to provide smaller estimates, that milestone is now sitting there taunting you, begging to be further elaborated. You may be comfortable with a large estimate that's not based on reality, but if the aggregate estimate is small because you haven't yet taken the time to drill down even a bit to day-scale, well... that's just unacceptable. 

Third, it's harder to argue with TSE. Which is another way of saying it's easier to accept a TSE. When your manager sees you've created a well thought out plan in advance that tries to capture all those details that normal intuition misses, he's not going to haggle with you over that one-week estimate for integration time ("one week? no no... it's 4 days at most!"). That's just not going to matter. Everybody knows we're not that good at estimating at that scale, anyway, so as long as an estimate isn't egregiously wrong people just move on. The discussion is suddenly focused on the content of your work plan, instead of the price tag attached to each task. Now the challenge is "who can think of more shit that can go wrong, that you forgot to include" instead of "who thinks this feature can be developed in less time".

I believe TSE needs to be the default method for estimating projects, as long as they're restricted to small sizes. Gigantt uses a variation of TSE. Our estimates aren't actually S/M/L/XL. They are currently: 1h/3h/5h/1d/3d/1w. Estimating a task is a one-click operation. This really reduces the friction of estimating (something people just don't like doing). It's always going to be the preferred, easiest way of estimating in Gigantt. In future versions we'll also add 0-duration estimates, for checklist type tasks (e.g. milestones). We may even add custom estimates (e.g. "4 months"), because we realize not everybody shares our above views on how to properly estimate projects and alternative project management solutions do offer this feature ubiquitously. But even if we do it's certainly going to be the 2nd choice, and we hope most users won't take advantage of this feature at all.
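To make the "rounding up" concrete, here's a tiny sketch of snapping an internal estimate to the nearest bucket. The hour values assigned to 1d/3d/1w are our own assumptions for illustration (8-hour days, 5-day weeks), not Gigantt's exact definitions:

```python
# Snap an internal estimate (in hours) up to the smallest bucket
# that fits. Anything bigger than the largest bucket must be broken
# down into smaller tasks - there is no XXXXXXXXL.

BUCKETS = [("1h", 1), ("3h", 3), ("5h", 5),
           ("1d", 8), ("3d", 24), ("1w", 40)]   # (label, hours) - assumed

def snap_up(hours):
    """Round an estimate up to the smallest fitting bucket label."""
    for label, size in BUCKETS:
        if hours <= size:
            return label
    raise ValueError("too big for one task - break it down instead")

print(snap_up(0.7))   # → '1h'
print(snap_up(4))     # → '5h'
```

Notice how an internal "4 hours" becomes "5h": the rounding itself bakes a little uncertainty into every estimate.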

Friday, May 13, 2011

Ants, Popcorn & Sex – An Introduction To Resource Leveling Algorithms

What would you do if you were asked to plan the construction of a building? Construction projects are typically big, involving many people and resources. They're also time-sensitive - you have exactly one week to build each floor, and the material for the next one needs to arrive just in time. Cement won't wait.

Since the 1950s, various methodologies have been developed to tackle the issue of creating a work-plan that results in a smooth-running project, with as few costly delays as possible and without wasting precious resources. Most notable in the construction industry is the Critical Path Method. But while CPM may help you create a logically coherent plan that finishes on time, it doesn't really guarantee that your plan is optimal in any way. What if the whole thing could be finished two months earlier? What if it could cost a lot less money simply by better utilizing the same resources? These are big projects with big money that needs to be spent correctly.

This brings us to the concept of resource leveling. It's easiest to describe what resource leveling is by giving examples of projects running without it. For example:
  • Workers are standing around idly because it's not their turn to use some machine.
  • Lots of people are hired at the beginning of the project and then need to be fired when they're not needed.
It's all about making the best use of limited resources. Ideally, you'd want to hire a fixed number of people and employ them fully for the entire duration of the project. When that's the case your project is optimally planned for the amount of resources it may expend.

To better illustrate what resource leveling “looks like” resource histograms are often used. It's just a graph with two axes: time and resources. The horizontal axis is usually work days and the vertical axis is the amount of resources used (e.g. man hours).

Here's a terrible one where it's easy to see that resources aren't used evenly throughout the project.

Resource leveling is all about getting that histogram to have a rectangular shape. If all of your workers are constantly employed then there's not much to improve - you have a perfectly leveled plan.
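In code, a resource histogram is just a per-day sum. Here's a minimal sketch; the task tuples and day granularity are assumptions for illustration, not any particular tool's data model:

```python
from collections import Counter

# Compute a resource histogram from scheduled tasks. Each task is a
# (start_day, end_day_exclusive, workers) tuple. A well-leveled plan
# produces a flat (rectangular) list of daily loads.

def histogram(tasks):
    used = Counter()
    for start, end, workers in tasks:
        for day in range(start, end):
            used[day] += workers
    return [used[day] for day in range(min(used), max(used) + 1)]

print(histogram([(0, 3, 2), (1, 4, 1), (3, 5, 2)]))   # → [2, 3, 3, 3, 2]
```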

It is thus the process of moving tasks around, while still maintaining their logical order of dependence, in a way that maximizes the use of your available resources. Resource leveling algorithms differ mainly in the way in which you choose which task to move where.

Turns out it's a hard problem. NP-hard, to be exact. In other words, you can't simply try all feasible task arrangements until you find the optimal one. Unless your project is tiny, exhausting every possible solution generally requires exponential running time, and your deadline is probably before the next ice age, so no luck there.

The problem is known as the resource-constrained scheduling problem (RCSP). In spirit it's related to other classic NP-hard problems, like the traveling salesman problem (TSP). There are tons of very clever algorithms that try to solve these problems by approximating the optimal solution. They differ in their running time and how close they get to an optimal solution. Some of them are pretty neat. Let's review them.

Heuristic Algorithms

This family of algorithms tries to use rules of thumb to make decisions. Remember, the decision an RCSP algorithm has to make is which job to shift in time in order to evenly distribute resources. Not every job can be moved. Some have to happen at a very specific time and have many successive jobs depending on their completion. But there's always some degree of freedom (or float) in the plan, and that's the space in which these algorithms mostly operate.

An example of a heuristic is the following rule: choose the job with the most float from all the jobs in the last sequence step of the plan (i.e. its final stages), and move it to a location where it has the biggest positive impact on the plan's “levelness”. Do this repeatedly, constantly measuring how leveled the resource histogram is at every step, until you can improve no more.
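Here's roughly what one step of such a greedy shift looks like. This is a deliberately simplified toy: one movable job, precomputed float bounds, and "levelness" measured as the sum of squared daily loads. Real levelers track float per job and iterate over many jobs:

```python
# Greedy leveling step: given the load contributed by all other
# (fixed) jobs, try every feasible start for one movable job and
# keep the start that minimizes the histogram's "unlevelness".

def unlevelness(loads):
    return sum(x * x for x in loads)   # lower = flatter histogram

def best_shift(fixed_loads, job_len, workers, earliest, latest, horizon):
    """Return the start day in [earliest, latest] that levels best."""
    best_start, best_score = None, None
    for start in range(earliest, latest + 1):
        loads = list(fixed_loads)
        for day in range(start, min(start + job_len, horizon)):
            loads[day] += workers
        score = unlevelness(loads)
        if best_score is None or score < best_score:
            best_start, best_score = start, score
    return best_start

# Fixed work occupies 2 workers on days 0-1; a 2-day, 2-worker job
# with float over starts 0-2 levels the plan best starting on day 2.
print(best_shift([2, 2, 0, 0], job_len=2, workers=2,
                 earliest=0, latest=2, horizon=4))   # → 2
```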

It's a relatively simple heuristic - too simple, really. Think about it; why should it find the optimal solution? It scans the work plan from end to beginning, greedily shifting jobs as much as it can. Why start at the end? Why not shift the smallest job instead of the one with the most float? It might very well make some unfortunate early decisions that would prevent it from reaching the optimal solution.

Most heuristic algorithms are considerably cleverer than the one above, but they've long been considered insufficient since it's not hard to find negative examples where they produce really poor results.

Metaheuristic Algorithms

Instead of following a rule of thumb when searching the solution space, metaheuristics often try to take cues from various natural processes in trying to figure out their search strategy. How do ants “know” the shortest path from their colony to food? How do organisms “know” how to adapt to their environment? Let's see how the answers to these questions can reveal interesting solutions to RCSP.

Genetic Algorithms

Which job shall we shift next and where do we shift it to? Deciding this question over and over forms a path through the solution space. Our goal is to find the global optimum – the point at which the histogram is rectangular (i.e. has a minimal moment, or in other words the least amount of fluctuation in resources). If we measure how leveled a plan is by some function F (for Fluctuations), we can describe our effort as trying to find the lowest point in the solution space, where F reaches a global minimum. The solution space usually has many dimensions, but for the sake of illustration let's think of it as a 3D terrain. Your goal is to reach the deepest valley, but you can't look ahead more than one step. How do you decide which way to go? Where do you even start?

Instead of trying to solve the problem directly (as the heuristic algorithm above does), metaheuristic algorithms are good at taking an existing solution and improving it iteratively. Genetic algorithms belong to this family. They start out by creating a few solutions somewhat at random. Each solution is encoded as a “DNA” sequence. For example, if the path a solution takes is “left, right, straight, straight” then its DNA can be encoded as “LRSS”. Each solution's fitness (how well it levels the histogram) is measured, and the best ones are mated with each other to produce offspring that share half of each parent's DNA at random. Then the process is repeated with the next generation. By adding randomness in the form of mutations we can prevent the algorithm from fixating on a sub-optimal path for too long. Mutations drive the algorithm to explore new paths, and getting rid of the unfit drives the algorithm to improve the promising paths.
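A toy genetic algorithm for this kind of leveling might look like the following sketch. The job list, horizon and fitness function are invented for illustration; real schedulers encode far richer constraints:

```python
import random

# Toy genetic algorithm for leveling. Each "DNA" is a tuple of start
# days, one per movable job; fitness rewards a flat histogram.

JOBS = [(2, 1), (2, 1), (2, 1)]   # (length_days, workers) per job
HORIZON = 6                       # days available in the plan

def loads(dna):
    """Daily resource usage implied by a DNA of start days."""
    days = [0] * HORIZON
    for (length, workers), start in zip(JOBS, dna):
        for d in range(start, start + length):
            days[d] += workers
    return days

def fitness(dna):
    return -sum(x * x for x in loads(dna))   # flatter = higher

def random_start(i):
    return random.randrange(HORIZON - JOBS[i][0] + 1)

def random_dna():
    return tuple(random_start(i) for i in range(len(JOBS)))

def mate(a, b):
    """Uniform crossover plus an occasional mutation."""
    child = tuple(random.choice(pair) for pair in zip(a, b))
    if random.random() < 0.2:
        i = random.randrange(len(child))
        child = child[:i] + (random_start(i),) + child[i + 1:]
    return child

def evolve(generations=200, pop_size=30, seed=0):
    random.seed(seed)
    pop = [random_dna() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # fittest first
        parents = pop[: pop_size // 2]        # survival of the fittest
        pop = parents + [mate(*random.sample(parents, 2))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(loads(best))   # a perfectly leveled plan would be [1, 1, 1, 1, 1, 1]
```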

A major challenge of such metaheuristics is, indeed, not getting stuck in local optima. A cool way to overcome this challenge is called simulated annealing.

Simulated Annealing

Imagine we're trying to find the deepest valley in the terrain pictured above by scattering a few popcorn kernels from above and letting them fall into the valleys. You want at least one of these kernels to reach the deepest valley, but how do you prevent them from getting stuck in a local valley? You turn up the “temperature” of the terrain until they start to pop and jump around slightly, possibly jumping right out of the local valley they might be in. You then turn the temperature back down and let them continue falling. This is similar to how simulated annealing works. Just like a metallurgist heats up a metal and cools it over and over to make it stronger, the simulated annealing approach to RCSP allows the algorithm to explore the solution space by converging on promising paths (cold) but also jumping around to explore nearby paths every once in a while (hot).
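Here's a toy leveling problem attacked with simulated annealing. The temperature schedule and acceptance rule are the classic textbook ones; the jobs and energy function are invented for illustration:

```python
import math
import random

# Toy simulated annealing for leveling. State: start days of three
# movable jobs; energy: sum of squared daily loads. High temperature
# lets us accept *worse* neighbors (the kernels "pop" out of local
# valleys); cooling makes the search increasingly greedy.

JOBS = [(2, 1), (2, 1), (2, 1)]   # (length_days, workers) per job
HORIZON = 6

def energy(starts):
    days = [0] * HORIZON
    for (length, workers), s in zip(JOBS, starts):
        for d in range(s, s + length):
            days[d] += workers
    return sum(x * x for x in days)   # 6 == perfectly flat here

def neighbor(starts):
    """Move one random job to a random feasible start day."""
    i = random.randrange(len(starts))
    s = list(starts)
    s[i] = random.randrange(HORIZON - JOBS[i][0] + 1)
    return tuple(s)

def anneal(steps=2000, t0=5.0, seed=0):
    random.seed(seed)
    state = tuple(0 for _ in JOBS)    # start with everything stacked
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9   # linear cool-down
        cand = neighbor(state)
        delta = energy(cand) - energy(state)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            state = cand              # accept, sometimes even uphill
    return state

best = anneal()
print(energy(best))   # 6 would mean a perfectly leveled plan
```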

Ant Colony Optimization

Ants are smarter than popcorn. Collectively, that is. Individually, not so much. Let's see how ants would find the deepest valley. Ants seek food pretty much at random, at first. They don't have a fantastic sense of smell like dogs, or amazing vision. But once they find food they somehow converge pretty quickly onto the shortest path from their colony to it. It doesn't take them long to form an orderly single file that shoots directly to the food and back. They do this by secreting pheromones.

Each ant secretes a pheromone that causes other ants to want to follow it. The more pheromone an ant picks up, the more likely it is to go in its direction. If an ant finds food and travels back and forth on the same path, it secretes more and more pheromone along this path, making it more and more appealing to other ants. And as more ants follow the path, it becomes even stronger. It's a feedback loop that makes the ants converge on a path to food. But that's only part of the story. The pheromone also evaporates over time. So what happens if two ants find two separate paths to the same food, but one is longer than the other? The longer path takes more time to travel, and as a result the pheromone has more time to evaporate, making it ultimately less attractive to the ants. So ants don't just find food and “tell” each other about it, they also iteratively find the best route to the food by choosing the one with the most pheromone on it.

We can do the same for the RCSP. Start out with a few feasible work plan arrangements and let simulated ants search for improvements. When an ant chooses a modification to the plan that results in a more rectangular histogram (an improvement) we deposit pheromones along the path to that solution. The bigger the improvement, the more pheromone we deposit. Pheromones here are just the probability of an ant following the same path. Ants will still wander off a bit, trying to explore nearby paths, but as the scent becomes stronger they will converge on a good solution. Pheromone evaporation will make sure they choose the shortest path.
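A toy Python sketch of the pheromone feedback loop, reduced to its essence: two hypothetical routes to the same food, one short and one long. Shorter routes earn stronger deposits, and evaporation erodes both each tick (all the constants here are made up for illustration):

```python
import random

random.seed(2)

# Two hypothetical routes to the same food; length is travel time.
lengths = {"short": 1.0, "long": 3.0}
pheromone = {"short": 1.0, "long": 1.0}   # start out equally attractive
EVAPORATION = 0.1

def choose():
    # An ant follows a route with probability proportional to its pheromone.
    total = sum(pheromone.values())
    return "short" if random.random() < pheromone["short"] / total else "long"

for _ in range(200):
    path = choose()
    # Deposit pheromone inversely proportional to length: the short route
    # gets reinforced more per trip.
    pheromone[path] += 1.0 / lengths[path]
    # Evaporation erodes every route's scent each tick.
    for p in pheromone:
        pheromone[p] *= 1 - EVAPORATION
```

After a couple hundred trips the colony has all but abandoned the long route: reinforcement plus evaporation is the whole trick.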

There are many more approaches to solving such global optimization problems: artificial neural networks, packing algorithms, monkey searches, tabu search, bee colony optimization (like ants, but with dancing instead of secreting).
What's neat about all these algorithms, aside from their awesome names, is that they don't require a real understanding of the particular problem at hand. They don't rely on intuition in deciding how to reschedule jobs. They just know how to improve an existing solution.

Tuesday, May 10, 2011

Collaboration in Gigantt

We're in the final testing stages of the first version of Gigantt that supports real-time collaboration. You can see what it's going to be like in our new preview video.

Turn on your speakers for full enjoyment. :)

Gigantt takes a novel approach to collaboration. We want to allow multiple users to edit the same plan at the same time - seeing each other's changes as they are made (like Google Docs or Wave) - but we don't want people stepping on each other's toes. Our solution is called a freeze period. Basically, anything you touch while editing a plan in Gigantt gets frozen for everybody else for 60 seconds. So each user can safely edit his own area of the organization's big work plan.

This approach has the advantage of making it visibly clear when someone else is editing the plan with you, so you can avoid conflicts instead of having to resolve them. We think this is a better form of collaboration than what most other collaborative systems offer, where if two people are editing the same document one of them will be notified of a conflict once he tries to save his changes (e.g. Wikipedia). We believe true collaboration is not racing to save your changes before someone else does, but rather being aware of who's editing alongside you right now and being able to give each other just enough time to edit in peace.
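For flavor, here's a toy Python sketch of what a per-item freeze period could look like. All the names (`Plan`, `try_edit`) are hypothetical; Gigantt's real implementation is a real-time distributed system, not three lines of dictionary bookkeeping:

```python
import time

FREEZE_SECONDS = 60

class Plan:
    """Toy per-item freeze period: touching an item freezes it for everyone else."""
    def __init__(self):
        self.frozen = {}  # item id -> (user who last touched it, when)

    def try_edit(self, item, user, now=None):
        now = time.time() if now is None else now
        owner, touched = self.frozen.get(item, (user, 0.0))
        if owner != user and now - touched < FREEZE_SECONDS:
            return False  # frozen for everybody but the last editor
        self.frozen[item] = (user, now)  # touching (re)starts the freeze
        return True
```

Used like this: if Alice touches an item at t=0, Bob's edit at t=30 is refused, but his edit at t=90 goes through, because the freeze has expired.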

Our apologies, again, for taking so long working on this release. We want to get it right. Many people have signed up for an invite to the private beta program - more than we can handle at this point. But once this collaboration release is finished we're immediately going to start working on the registration/invitation system (now it's all manual) so we can let more people in. Thanks for being patient.

Tuesday, April 26, 2011

Introduction to Mind-Maps

Gigantt draws lots of inspiration from Mind-Maps. They're terribly useful things, not just for brainstorming but also for managing projects. Just thought I'd share a great introduction to mind-maps I read today on InstantShift. Go read it - it's well written.

I've written about mind-maps way back when I started this blog. 

Nothing beats their ability to capture ideas quickly and to manipulate a tree of ideas visually with drag-and-drop. I do like to think of Gigantt as a work-plan oriented collaborative mind-map with support for more complex plan topologies. I think any project management tool that's slower to operate than a mind-map is a waste of time.

Monday, April 11, 2011

Smooth Rendering in Flex

What do you do if the number of interactive items you give Flex to render makes it choke and drop the frame rate to 0.3? This is something we encountered recently at Gigantt, and I thought I'd share our solution to the problem.

The Problem

So let's say you have a few hundred complex interactive items that you want to add to the stage. In our case we sometimes need to fit a very large number of "task" objects into the screen by scaling them down, because that's part of Gigantt's infinite-zoom approach.
Giving Flex too many items to render at once just chokes it. Flex tries to render them all at the next frame and just doesn't make it. It tries to do something called phased instantiation, but most of the time this seems to do more harm than good. You can end up with a Flex application that, instead of rendering at a smooth - let's say, 25 FPS - pauses every once in a while to render a bunch of items, which results in a really choppy UI experience. This becomes especially clear if you use lots of animation.
To be sure, if you constantly need to render more than Flex can handle in 1/25th of a second then you're out of luck. But in many cases rendering happens in bursts. This is because one of the costlier operations you can do with interactive objects in Flex is to add them to the stage. Once they're staged, moving them around - and even transforming and scaling them - is much quicker.
To top that all off, you never know just how much free CPU your customer's machine is going to have. Maybe they're running a bunch of stuff already and there just isn't enough juice left for Flex. So you can't really decide in advance how many items per second you feel comfortable rendering, because this number will vary between machines.

The Solution

What we need is a way to make sure our application never drops below 25 FPS so that everything feels snappy. In the current development version of Gigantt we ended up implementing something we call an idle rendering queue. In essence, we divide all our rendering operations into reasonably small chunks. We then measure the application's actual frame rate. As soon as we detect a high enough frame rate, we know that Flex is done rendering whatever it worked hard to render and that the engine is free to handle more work. We then take another chunk from the queue and feed it to Flex. Our application's declared frame rate is 60 FPS, but it never really reaches that number unless you have a strong CPU or you're not rendering a lot of new items. By throttling our rendering jobs according to what the user's machine is able to handle at each point in time, we make sure our application is never too heavy for whatever machine is running it. The frame rate remains the same - a minimum of 25 FPS - it might just take more time to actually show every interactive object. If you implement a ZUI like Google Maps (or Gigantt) you can naturally allow "farther away" elements to render in the background, so to speak.
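The logic of the idle rendering queue is language-agnostic, so here's a minimal sketch in Python rather than ActionScript (the class and method names are hypothetical, not Gigantt's actual code). The key move is that work is dispatched only when the measured frame rate says the renderer has caught up:

```python
from collections import deque

TARGET_FPS = 25.0

class IdleRenderQueue:
    """Feed the renderer another chunk only when it has caught up."""
    def __init__(self, chunks):
        # Rendering work pre-divided into reasonably small chunks.
        self.chunks = deque(chunks)

    def on_frame(self, measured_fps, render):
        # Called once per frame with the frame rate measured over the
        # last few frames. Only dispatch more work when running smoothly,
        # so the frame rate never dips below the target for long.
        if measured_fps >= TARGET_FPS and self.chunks:
            render(self.chunks.popleft())
```

In Flex the `on_frame` hook would hang off `ENTER_FRAME`; the same throttling pattern works anywhere you can measure your own frame rate.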

This isn't ideal. An ideal solution would be something that's able to render as many items as can be discerned on screen, so that if you zoom in you get a continuous impression of the items already "being there". But that's an extraordinarily large number of polygons to render (in the trillions, really). Our approach does the best with what we have by providing a seamless zoom-in effect that takes advantage of the fact that it takes the eye a few hundred milliseconds to actually see what's being rendered on screen before the user can make a conscious decision where next to zoom in to. This optimization is a neat trick that we thought we'd share with the world. You can expect to enjoy it in the next release of Gigantt in a few weeks.

Sunday, March 6, 2011

The Landing Page Has Landed

After a long weekend of web design and HTML wrestling, finally Gigantt has a public face (aside from this blog):

There's a nice little video there demonstrating the basic features.

Tell your friends.
Join the beta.
Be the first kid in your class to manage complex work-plans with thousands of tasks.

Wednesday, February 23, 2011

Reundo & Redo

Implementing undo is a tricky thing. This release of Gigantt contains a complete rewrite of the undo mechanism and consequently adds support for redo (the opposite of undo).

Undo & redo are very similar to back & forward in a browser. Undoing changes to a Gigantt plan can be thought of as navigating backwards in the plan's history. And of course, once you go back and change something, you can no longer go forward. Browsers work the same way: if you navigate back and then click a link, for example, you can't go forward any more.

Until now, undo was implemented in a rather naive way: the entire plan was simply saved as a copy whenever the user changed anything in it. This worked well, but was obviously inefficient. The right thing to do, presumably, is to save just the "diff" (whatever changed). This optimization also makes saving the plan much faster, because it's much quicker to send just the diff to the server than to upload the entire plan every few seconds.
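A diff-based undo/redo stack is simple enough to sketch in a few lines of Python. Here the plan is just a dict and each diff records `{key: (old, new)}` pairs; that format and the `UndoHistory` name are made up for illustration, not Gigantt's actual data model:

```python
class UndoHistory:
    """Toy diff-based undo/redo: store diffs, not full copies of the plan."""
    def __init__(self, plan):
        self.plan = plan
        self.undos, self.redos = [], []

    def apply(self, diff):
        for key, (_, new) in diff.items():
            self.plan[key] = new
        self.undos.append(diff)
        self.redos.clear()   # like a browser: a new change discards "forward"

    def undo(self):
        diff = self.undos.pop()
        for key, (old, _) in diff.items():
            self.plan[key] = old
        self.redos.append(diff)

    def redo(self):
        diff = self.redos.pop()
        for key, (_, new) in diff.items():
            self.plan[key] = new
        self.undos.append(diff)
```

Because each diff carries both the old and the new value, the same record serves for undoing and redoing, and the diffs are exactly what gets shipped to the server.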

This little bit of redesign and refactoring doesn't add a whole lot of functionality to the system - just the redo action. But it prepares the ground for a major new feature that's going to arrive in the next few months: collaboration. Being able to support multiple users editing the same plan at the same time, and also allowing each user to undo his actions in case he made mistakes, is a major feature of Gigantt. And the first thing you need to do to implement such a feature is to know how to manage the history of a plan by keeping track of the "diffs" each user makes. I'll write more on how collaboration is going to be supported in Gigantt in future posts. 

One more thing: URLs in the notes panel are now highlighted as links. You can Ctrl-Click on them to navigate (in a new window/tab). So if you have a work item for fixing a certain bug, you can add a link to its corresponding page in an issue-tracking system, for example.

This is release #10 of Gigantt. We're moving to a quicker release cycle where each release has one major feature and usually plenty of bug fixes. Next release will bring the ability to set an explicit start-date to items. This makes it possible to add "receivables" to the plan - items that may have very short duration but can only start on a certain date in the future. More on that in an upcoming post. Enjoy 0.10!

Sunday, February 13, 2011

How to build the Flex SDK

Gigantt's UI is built with Adobe Flex 4.1. It's a great web development SDK, but Adobe is notoriously slow to respond to bug reports. So sometimes we find ourselves having to go into the Flex SDK's source code in order to debug stuff or work around bugs. In order to fix a bug in mx:DateField today I had to download and build the SDK (which is open source) from Adobe.
It's not hard, but I thought I'd share the how-to with the world:

How to build the Flex SDK on Windows
(based on Windows 7 64-bit)

  • Java - You'll need the JDK, not just the JRE (the runtime which comes with Flash Builder, for example).
    • I used the 64 bit version:
      "c:\Program Files\Java\jdk1.6.0_23" 
  • Ant - You can download the binaries:
    • I used 1.8.2
    • Just unzip it to c:\dev\ant
  • Flex open-source SDK
    • I used
    • Unzip it to c:\dev\sdk (just for simplicity)
  1. Let's create a batch file to set some useful envars: envars.bat
    set JAVA_HOME=c:\Program Files\Java\jdk1.6.0_23
    set PATH=c:\dev\ant\bin;%PATH%
    set ANT_OPTS=-Xmx256m
    1. Open cmd.exe and run it...
  2. Edit c:\dev\sdk\frameworks\build.xml 
    1. Look for:
      <target name="datavisualization" description="Builds datavisualization.swc">
    2. And fix the location of the manifest file from:
      "${datavis.dir}/manifest.xml" to:
  3. Run Ant: c:\dev\sdk\frameworks> ant
    1. It should end with such a message: BUILD SUCCESSFUL
  4. Now let's tell Flash Builder where to find this new SDK: c:\dev\sdk 
    1. Add it to the "Installed SDKs" settings in Flash Builder
    2. Make sure your project is configured to use this SDK (it was probably created with the original one and still refers to it).
  5. Rebuild your project. It should work.