What would you do if you were asked to plan the construction of a building? Construction projects are typically big, involving many people and resources. They're also time-sensitive - you have exactly one week to build each floor, and the material for the next one needs to arrive just in time. Cement won't wait.
Since the 1950s, various methodologies have been developed to tackle the issue of creating a work plan that results in a smooth-running project, with as few costly delays as possible and without wasting precious resources. Most notable in the construction industry is the Critical Path Method (CPM). But while CPM may help you create a logically coherent plan that finishes on time, it doesn't guarantee that your plan is optimal in any way. What if the whole thing could be finished two months earlier? What if it could cost a lot less money simply by better utilizing the same resources? These are big projects with big money that needs to be spent correctly.
This brings us to the concept of resource leveling. It's easiest to describe what resource leveling is by giving examples of projects running without it:
- Workers are standing around idly because it's not their turn to use some machine.
- Lots of people are hired at the beginning of the project and then need to be fired when they're not needed.
It's all about making the best use of limited resources. Ideally, you'd want to hire a fixed number of people and employ them fully for the entire duration of the project. When that's the case, your project makes optimal use of the resources it's allowed to expend.
To better illustrate what resource leveling “looks like”, resource histograms are often used. A resource histogram is just a graph with two axes: time and resources. The horizontal axis is usually work days and the vertical axis is the amount of resources used (e.g. man-hours).
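To make this concrete, here's a minimal sketch in Python (with made-up task data and field names) of how the histogram's values can be computed: for each work day, add up the resources of every task active on that day.

```python
from collections import defaultdict

# Hypothetical schedule: each task has a start day, a duration, and a
# daily resource requirement (workers needed per day).
tasks = [
    {"name": "foundations", "start": 0, "duration": 5, "workers": 8},
    {"name": "framing",     "start": 5, "duration": 7, "workers": 12},
    {"name": "plumbing",    "start": 7, "duration": 4, "workers": 3},
]

def resource_histogram(tasks):
    """Total workers in use on each day of the project."""
    usage = defaultdict(int)
    for task in tasks:
        for day in range(task["start"], task["start"] + task["duration"]):
            usage[day] += task["workers"]
    return [usage[day] for day in range(max(usage) + 1)]

print(resource_histogram(tasks))  # [8, 8, 8, 8, 8, 12, 12, 15, 15, 15, 15, 12]
```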
Here's a terrible one where it's easy to see that resources aren't used evenly throughout the project.
Resource leveling is all about getting that histogram to have a rectangular shape. If all of your workers are constantly employed then there's not much to improve - you have a perfectly leveled plan.
It is thus the process of moving tasks around, while still maintaining their logical order of dependence, in a way that maximizes the use of your available resources. Resource leveling algorithms differ mainly in how they choose which task to move, and where to move it.
Turns out it's a hard problem. NP-hard, to be exact. In other words, you can't simply try all feasible task arrangements until you find the optimal one. Unless your project is tiny, exhausting every possible arrangement requires exponential running time, and your deadline is probably before the next ice age, so no luck there.
The problem is known as the resource-constrained scheduling problem (RCSP). In a way, it is a generalization of the traveling salesman problem (TSP). There are tons of very clever algorithms that try to solve these problems by approximating the optimal solution. They differ in their running time and in how close they get to an optimal solution. Some of them are pretty neat. Let's review them.
Heuristic Algorithms
This family of algorithms tries to use rules of thumb to make decisions. Remember, the decision an RCSP algorithm has to make is which job to shift in time in order to evenly distribute resources. Not every job can be moved. Some have to happen at a very specific time and have many successive jobs depending on their completion. But there's always some degree of freedom (or float) in the plan, and that's the space in which these algorithms mostly operate.
An example of a heuristic is the following rule: choose the job with the most float from all the jobs in the last sequence step of the plan (i.e. its final stages), and move it to a location where it has the biggest positive impact on the plan's “levelness”. Do this repeatedly, constantly measuring how leveled the resource histogram is at every step, until you can improve no more.
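As a rough sketch (assuming we already have each job's float from the CPM calculations, a feasible_shifts helper that lists the start days a job can move to without breaking its dependencies, and an unevenness score for the histogram, all hypothetical here), that greedy loop might look like this:

```python
def level_by_float(plan, unevenness, float_of, feasible_shifts):
    """Greedy leveling sketch: repeatedly take the job with the most float
    and move it to whichever feasible start day flattens the histogram most."""
    improved = True
    while improved:
        improved = False
        # Scan jobs from most float to least (roughly: end of the plan first).
        for job in sorted(plan, key=float_of, reverse=True):
            best_start, best_score = plan[job], unevenness(plan)
            for start in feasible_shifts(plan, job):
                candidate = dict(plan)      # plan maps each job to a start day
                candidate[job] = start
                score = unevenness(candidate)
                if score < best_score:      # keep only strict improvements
                    best_start, best_score = start, score
            if best_start != plan[job]:
                plan[job] = best_start
                improved = True
    return plan
```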
It's a relatively simple heuristic - too simple, really. Think about it; why should it find the optimal solution? It scans the work plan from end to beginning, greedily shifting jobs as much as it can. Why start at the end? Why not shift the smallest job instead of the one with the most float? It might very well make some unfortunate early decisions that would prevent it from reaching the optimal solution.
Most heuristic algorithms are considerably cleverer than the one above, but they've long been considered insufficient, since it's not hard to find counterexamples where they produce really poor results.
Metaheuristic Algorithms
Instead of following a rule of thumb when searching the solution space, metaheuristics often take cues from various natural processes to shape their search strategy. How do ants “know” the shortest path from their colony to food? How do organisms “know” how to adapt to their environment? Let's see how the answers to these questions can reveal interesting solutions to RCSP.
Genetic Algorithms
Which job shall we shift next and where do we shift it to? Deciding on this question over and over forms a path through the solution space. Our goal is to find the global optimum – the point at which the histogram is rectangular (i.e. has a minimal moment, or in other words the least amount of fluctuation in resources). If we measure how leveled a plan is by some function F (for Fluctuations), we can describe our effort as trying to find the lowest point in the solution space, where F reaches a global minimum. The solution space usually has many dimensions, but for the sake of illustration let's think of it as a 3D terrain. Your goal is to reach the deepest valley, but you can't look ahead more than one step. How do you decide which way to go? Where do you even start?
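One common choice for F, and the one I'll assume in the sketches below, is the sum of squared daily resource usage: for the same total amount of work, that sum is smallest when the histogram is perfectly flat.

```python
def fluctuations(histogram):
    """F: sum of squared daily resource usage. Lower means flatter."""
    return sum(r * r for r in histogram)

# A flat plan beats a spiky one that consumes the same total resources:
print(fluctuations([5, 5, 5, 5]))  # 100
print(fluctuations([8, 2, 8, 2]))  # 136
```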
Instead of trying to solve the problem directly (as the heuristic algorithm above does), metaheuristic algorithms are good at taking an existing solution and improving it iteratively. Genetic algorithms belong to this family. They start out by creating a few solutions somewhat at random. Each solution is encoded as a “DNA” sequence. For example, if the path a solution takes is “left, right, straight, straight” then its DNA can be encoded as “LRSS”. Each solution's fitness (how well it levels the histogram) is measured, and the best ones are mated with each other to produce offspring that share half of each parent's DNA at random. Then the process is repeated with the next generation. By adding randomness in the form of mutations we can prevent the algorithm from fixating on a sub-optimal path for too long. Mutations drive the algorithm to explore new paths, and getting rid of the unfit drives the algorithm to improve the promising paths.
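Here's a toy version of that loop. It assumes each candidate plan is encoded as a vector of per-job delays (each bounded by that job's float) and that a fitness function decodes such a vector and scores the resulting histogram; the usage example at the bottom fakes the fitness with a trivial stand-in.

```python
import random

def genetic_search(fitness, gene_bounds, pop_size=30, generations=200,
                   mutation_rate=0.1):
    """Minimal genetic algorithm sketch; lower fitness is better."""
    def random_dna():
        return [random.randint(0, bound) for bound in gene_bounds]

    population = [random_dna() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]      # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            mom, dad = random.sample(parents, 2)
            # Uniform crossover: each gene comes from a random parent.
            child = [random.choice(pair) for pair in zip(mom, dad)]
            # Mutation: occasionally re-randomize a gene to keep exploring.
            for i, bound in enumerate(gene_bounds):
                if random.random() < mutation_rate:
                    child[i] = random.randint(0, bound)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

# Toy usage: 5 jobs, each with up to 10 days of float; pretend the histogram
# is flattest when every job is delayed by exactly 3 days.
print(genetic_search(lambda dna: sum((g - 3) ** 2 for g in dna), [10] * 5))
```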
A major challenge of such metaheuristics is, indeed, not getting stuck in local optima. A cool way to overcome this challenge is called simulated annealing.
Simulated Annealing
Imagine we're trying to find the deepest valley in the terrain pictured above by scattering a few popcorn kernels from above and letting them fall into the valleys. You want at least one of these kernels to reach the deepest valley, but how do you prevent them from getting stuck in a local valley? You turn up the “temperature” of the terrain until they start to pop and jump around slightly, possibly jumping right out of the local valley they might be in. You then turn the temperature back down and let them continue falling. This is similar to how simulated annealing works. Just like a metallurgist heats up a metal and cools it over and over to make it stronger, the simulated annealing approach to RCSP allows the algorithm to explore the solution space by converging on promising paths (cold) but also jumping around to explore nearby paths every once in a while (hot).
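In code, the temperature is just how willing we are to accept a move that makes the histogram worse. A bare-bones sketch (the random_neighbor function, which would shift one job to another start day within its float, is assumed here):

```python
import math
import random

def simulated_annealing(initial_plan, fitness, random_neighbor,
                        temperature=10.0, cooling=0.995, steps=10_000):
    """Bare-bones simulated annealing; fitness (e.g. F above) is minimized."""
    current, current_score = initial_plan, fitness(initial_plan)
    best, best_score = current, current_score
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = fitness(candidate) - current_score
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops (the "popping" phase).
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current, current_score = candidate, current_score + delta
        if current_score < best_score:
            best, best_score = current, current_score
        temperature *= cooling  # cool down gradually
    return best
```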
Ant Colony Optimization
Ants are smarter than popcorn. Collectively, that is. Individually, not so much. Let's see how ants would find the deepest valley. Ants seek food pretty much at random, at first. They don't have a fantastic sense of smell like dogs, or amazing vision. But once they find food they somehow converge pretty quickly onto the shortest path from their colony to it. It doesn't take them long to form an orderly single file that shoots directly to the food and back. They do this by secreting pheromones.
Each ant secretes a pheromone that causes other ants to want to follow it. The more pheromone an ant picks up along a path, the more likely it is to go in that direction. If an ant finds food and travels back and forth on the same path, it secretes more and more pheromone along it, making it more and more appealing to other ants. And as more ants follow the path, the trail becomes even stronger. It's a feedback loop that makes the ants converge on a path to food. But that's only part of the story. The pheromone also evaporates over time. So what happens if two ants find two separate paths to the same food, but one is longer than the other? The longer path takes more time to travel, and as a result the pheromone has more time to evaporate, making it ultimately less attractive to the ants. So ants don't just find food and “tell” each other about it, they also iteratively find the best route to the food by choosing the one with the most pheromone on it.
We can do the same for the RCSP. Start out with a few feasible work plan arrangements and let simulated ants search for improvements. When an ant chooses a modification to the plan that results in a more rectangular histogram (an improvement), we deposit pheromones along the path to that solution. The bigger the improvement, the more pheromone we deposit. Pheromones here simply raise the probability of another ant following the same path. Ants will still wander off a bit, trying to explore nearby paths, but as the scent becomes stronger they will converge on a good solution. Pheromone evaporation will make sure they choose the shortest path.
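A compressed sketch of that loop, using the same toy delay-vector encoding as the genetic algorithm above, with pheromone stored per (job, delay) choice and evaporation applied every round:

```python
import random

def ant_colony_search(fitness, gene_bounds, n_ants=20, rounds=100,
                      evaporation=0.1):
    """Tiny ant colony optimization sketch; lower fitness is better."""
    # One pheromone value for every (job, possible delay) choice.
    pheromone = [[1.0] * (bound + 1) for bound in gene_bounds]
    best, best_score = None, float("inf")
    for _ in range(rounds):
        for _ in range(n_ants):
            # Each ant builds a plan, preferring heavily scented choices.
            plan = [random.choices(range(len(trail)), weights=trail)[0]
                    for trail in pheromone]
            score = fitness(plan)
            if score < best_score:
                best, best_score = plan, score
            # Deposit pheromone along this plan's choices; bigger
            # improvements (lower scores) deposit more.
            for job, delay in enumerate(plan):
                pheromone[job][delay] += 1.0 / (1.0 + score)
        # Evaporation: old trails fade unless they keep being reinforced.
        for trail in pheromone:
            for i in range(len(trail)):
                trail[i] *= 1.0 - evaporation
    return best

# Same toy objective as before: flattest when every job is delayed by 3 days.
print(ant_colony_search(lambda dna: sum((g - 3) ** 2 for g in dna), [10] * 5))
```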
There are many more approaches to solving such global optimization problems: artificial neural networks, packing algorithms, monkey search, tabu search, bee colony optimization (like ants, but with dancing instead of secreting).
What's neat about all these algorithms, aside from their awesome names, is that they don't require a real understanding of the particular problem at hand. They don't rely on intuition in deciding how to reschedule jobs. They just know how to improve an existing solution.