In computer science, there are problems that taunt and mock you: the brain-mutilating vortex of a difficult recurrence, a sprawling parallel algorithm, anything that almost but doesn't quite work out. The salt in the wound is often that the problem is suboptimally easy to solve. That is, an answer sits right there in front of you, one that will quickly and simply return a correct answer, just not the most correct answer. Another algorithm can do better, and that algorithm is a hellscape.
By Michael Byrne | MOTHERBOARD
To see this, and to see why it's interesting and highly relevant, we need to understand one of the foundational problems of computer science: the knapsack problem. It's easy enough to explain, and the odds are pretty good that you've encountered it in everyday life. It goes like this: you're given some container (a knapsack, but really whatever) with a certain capacity, and the goal is to fill it with the set of objects of greatest total value while staying within that capacity.
How can you put the most valuable set of objects into the pack without it being too heavy? It's easy enough to sort out something like the situation below just by thinking it through, but the problem very quickly becomes complex as more items are added and as the container's capacity increases. In fact, it becomes impossibly complex, hopeless. The problem is what's known as NP-hard, which means that no known algorithm can solve it both correctly and quickly on all possible cases (and, unless P equals NP, no such algorithm exists at all).
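The gap between quick and correct is easy to see in code. Here's a minimal sketch (the items and numbers are invented for illustration, not taken from the article): a greedy strategy that grabs items by value-per-weight runs fast but can miss the best packing, while checking every subset always finds it, at exponential cost.

```python
# Greedy vs. exhaustive search on a tiny knapsack instance.
# Items are (name, weight, value); all values here are made up.
from itertools import combinations

items = [("tent", 10, 60), ("stove", 20, 100), ("sleeping bag", 30, 120)]
capacity = 50

def greedy(items, capacity):
    """Take items in order of value density until nothing else fits. Fast, not optimal."""
    total_w = total_v = 0
    chosen = []
    for name, w, v in sorted(items, key=lambda it: it[2] / it[1], reverse=True):
        if total_w + w <= capacity:
            chosen.append(name)
            total_w += w
            total_v += v
    return chosen, total_v

def exhaustive(items, capacity):
    """Try every subset. Always correct, but 2^n subsets for n items."""
    best = ([], 0)
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            w = sum(it[1] for it in combo)
            v = sum(it[2] for it in combo)
            if w <= capacity and v > best[1]:
                best = ([it[0] for it in combo], v)
    return best

print(greedy(items, capacity))      # grabs the tent first, ends up with value 160
print(exhaustive(items, capacity))  # stove + sleeping bag: value 220
```

The greedy pass takes the tent (best value per pound) and then can't fit the best pair; exhaustive search finds the stove-plus-sleeping-bag packing worth 220. With three items the exhaustive check is instant, but doubling the item count squares the work, which is the "hopeless" scaling the article describes.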