Group I P vs NP: a Clay Millennium Problem with Economic Flavor
The Problem Statement
P = NP?
P vs. NP is perhaps the biggest open problem in computer science and mathematics, but what is P vs. NP? The question of interest to people working in computer science and mathematics is this: can every problem whose answer can be checked quickly by a computer also be solved quickly by a computer? P and NP are the two classes of problems being referred to: P problems are fast for computers to solve, and so are considered "easy"; NP problems are fast (and so "easy") for a computer to check, but are not necessarily easy to solve. So far, no one has been able to decisively answer the question one way or the other. Frequently called the most important outstanding question in theoretical computer science, the question of whether P equals NP is one of the seven problems for which the Clay Mathematics Institute will award a million dollars for a proof or a disproof.
Background
In 1956, Kurt Gödel wrote a letter to John von Neumann in which he asked whether a certain NP-complete problem could be solved in quadratic or linear time. In 1971, Stephen Cook introduced the precise statement of the P vs. NP problem in the article "The complexity of theorem-proving procedures".
Importance
Problems in both P and NP appear throughout everyday applications, and for many of the NP problems nobody knows a method of solving them that is faster than testing every possible answer. Rather than wait for an exact solution, most people choose to approximate: they get an answer quickly, but with some error involved. Many optimization problems, such as those asking for the maximum value achievable, lie in NP and so are not always worth the time to compute exactly. This means that people are generally not using their resources in the way that would yield the largest possible value.
Grasping the Problem
We start by looking at NP-complete problems. The NP-complete problems are the problems in NP to which every other NP problem can be reduced in polynomial time; like all problems in NP, their solutions can be verified in polynomial time, and a fast solution to any one of them would give a fast solution to every problem in NP.
Next, we look at NP-hard problems. These are the problems that are at least as hard as the hardest problems in NP; unlike NP-complete problems, however, they need not be verifiable in polynomial time. Looking at the definitions alone may not give us an obvious understanding of how NP-complete problems arise.
In order to tackle, or even grasp, the problem, we need to look at the bigger picture. These sets of problems appear to be separate from each other until they are viewed from a different perspective and reduced to one another, making them a family of interconnected problems, all unifiable and perhaps solvable by a common principle or technique; a sketch of one such reduction follows. Any hint or step towards a solution of this problem would have huge implications for the field of cryptography, because cryptography relies on certain problems being difficult to solve. A constructive resolution showing P = NP would render most of the currently prevalent cryptographic techniques very vulnerable.
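To make the idea of reducing one problem to another concrete, here is a minimal Python sketch (an illustration of my own, not taken from the sources above; the function names are hypothetical). It uses the classic fact that a graph on n vertices has an independent set of size k exactly when it has a vertex cover of size n - k, so an Independent Set question can be transformed into a Vertex Cover question in polynomial time and answered by any Vertex Cover solver.

```python
# A minimal sketch of a polynomial-time reduction between two NP-complete
# problems: Independent Set and Vertex Cover.  A graph has an independent
# set of size k if and only if it has a vertex cover of size n - k, so any
# solver for Vertex Cover can be reused to answer Independent Set questions.

from itertools import combinations


def has_vertex_cover(vertices, edges, size):
    """Brute-force check: is there a set of `size` vertices touching every edge?"""
    for cover in combinations(vertices, size):
        cover = set(cover)
        if all(u in cover or v in cover for u, v in edges):
            return True
    return False


def has_independent_set(vertices, edges, k):
    """Reduce Independent Set to Vertex Cover: ask for a cover of size n - k."""
    return has_vertex_cover(vertices, edges, len(vertices) - k)


if __name__ == "__main__":
    vertices = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3)]                 # a path on four vertices
    print(has_independent_set(vertices, edges, 2))   # True: {0, 2} works
```

The brute-force cover search is itself exponential; the point of the sketch is only that the transformation between the two problems is cheap.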
Existing Theories
As said earlier, P problems are those that can be solved quickly, that is, in polynomial time. This means that there is generally a better method than brute force, or that the brute-force method itself does not take exponential time. NP problems are all problems whose solutions can be checked in polynomial time; the class NP therefore contains all of P. However, NP problems cannot necessarily be solved quickly. There is a subset of problems in NP called NP-complete. These are the hardest problems that still lie in NP: no polynomial-time algorithm is known for any of them, and every known exact algorithm takes super-polynomial (typically exponential) time. By virtue of being the hardest, if any one of them were shown to have a quick solution, it would imply that all problems in NP have a quick solution. So NP-completeness can be thought of as a way of making the big P = NP question equivalent to smaller questions about the hardness of individual problems. NP-complete problems are all related as well: by transforming the input in polynomial time, every NP-complete problem can be solved using a solution method for any other NP-complete problem.
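To see the "quick to check, slow to solve" contrast in code, here is a small Python sketch of my own (not part of the cited material) using subset sum, a standard NP-complete problem: verifying a proposed certificate takes one pass over the input, while the obvious exact solver tries all 2^n subsets.

```python
# Subset Sum: given numbers and a target, is there a subset summing to the target?
# Verifying a proposed subset (a "certificate") is fast; finding one by brute
# force takes time exponential in the number of items.

from itertools import combinations


def verify(numbers, target, certificate):
    """Polynomial-time check: the certificate must use the given numbers and hit the target."""
    remaining = list(numbers)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target


def solve_brute_force(numbers, target):
    """Exponential-time search: try every one of the 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None


if __name__ == "__main__":
    numbers, target = [3, 34, 4, 12, 5, 2], 9
    answer = solve_brute_force(numbers, target)      # e.g. [4, 5]
    print(answer, verify(numbers, target, answer))   # [4, 5] True
```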
So if we believe that P and NP are unequal, and we prove that some problem is NP-complete, we should believe that it does not have a fast algorithm. For reasons that are not well understood, most problems we have looked at in NP turn out either to be in P or to be NP-complete, so the theory of NP-completeness is a good way of showing that a problem is likely to be hard, because it applies to a lot of problems. There are, however, problems that are in NP, not known to be in P, and not likely to be NP-complete. Why should we care? NP-complete problems really do come up all the time. Knowing they are hard allows you to choose whether it is worth the time to solve a problem exactly, or whether it would be better to solve it approximately and quickly. It is often possible to come up with a provably fast algorithm that does not solve the problem exactly but produces a solution you can prove is close to right.
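As an example of the kind of provably fast, provably close algorithm just mentioned, the following Python sketch (my own illustration, under the assumption that the graph is given as an edge list) shows the well-known maximal-matching heuristic for Vertex Cover: it runs in time linear in the number of edges and always returns a cover at most twice the size of an optimal one.

```python
# A classic approximation algorithm: Vertex Cover via a maximal matching.
# Repeatedly pick an uncovered edge and add both of its endpoints to the cover.
# The result is always a valid cover, and provably at most 2x the optimum,
# even though finding the true optimum is NP-hard.


def approximate_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered
            cover.add(u)
            cover.add(v)
    return cover


if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
    print(approximate_vertex_cover(edges))   # a valid cover, at most twice optimal
```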
Examples Related to the P vs. NP Problem
The following examples describe problems for which we currently must either spend exponential time to find an exact solution or settle for a quick approximation. If P = NP, these problems could be solved both quickly and exactly.
- Long Simple Paths
A simple path in a graph is just one without any repeated edges or vertices. To define the problem of finding the longest path in terms of complexity theory, we need to formalize it as a yes-or-no question: given a graph G, vertices s and t, and a number k, does there exist a simple path from s to t with at least k edges? A solution to this problem would then consist of such a path. If you are given a path, you can quickly look at it, add up its length, and double-check that it really is a path from s to t with at least k edges. This can all be done in linear time, so certainly it can be done in polynomial time, which places the problem in NP. However, we do not know whether this problem is in P. In fact, it is NP-complete, so we believe that no polynomial-time algorithm exists and that essentially nothing better than brute force is possible.
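A minimal Python sketch of the quick verification just described (my own illustration; the adjacency-set graph representation is an assumption, not something specified in the text): given a claimed path, it checks in linear time that the path is simple, that consecutive vertices are joined by edges, that it runs from s to t, and that it uses at least k edges.

```python
# Verifier for the "long simple path" decision problem: given a graph G
# (as adjacency sets), vertices s and t, a bound k, and a claimed path,
# check in linear time that the path is genuine, simple, and long enough.


def verify_long_path(graph, s, t, k, path):
    if not path or path[0] != s or path[-1] != t:
        return False
    if len(set(path)) != len(path):                 # repeated vertex => not simple
        return False
    for u, v in zip(path, path[1:]):                # every step must be an edge
        if v not in graph.get(u, set()):
            return False
    return len(path) - 1 >= k                       # at least k edges


if __name__ == "__main__":
    graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # the path graph 0-1-2-3
    print(verify_long_path(graph, 0, 3, 3, [0, 1, 2, 3]))   # True
```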
- Chess
What is involved in chess programming? Essentially the sequences of moves possible form a tree: the first player has a choice of 20 different moves (most of which are not very good), after each of which the second player has a choice of many responses, and so on. Chess playing programs work by traversing this tree, and finding what the possible consequences would be of each different move. The tree of moves is not very deep, as a typical chess game might last 40 moves, and it is rare for one to reach 200 moves. Since each move involves a step by each player, there are at most 400 positions involved in most games. If we traversed the tree of chess positions only to that depth, we would only need enough memory to store the 400 positions on a single path at a time. This much memory is available even on the smallest computers you are likely to use. So perfect chess playing is a problem in PSPACE. (Actually one must be more careful in definitions. There is only a finite number of positions in chess, so in principle you could write down the solution in constant time. But that constant would be very large. Generalized versions of chess on larger boards are in PSPACE.) The reason this deep game-tree search method can't be used in practice is that the tree of moves branches a lot, so that even though it is not deep it has an enormous number of vertices. We won't run out of space if we try to traverse it, but we will run out of time before we get even a small fraction of the way through. Some pruning methods, notably "alpha-beta search", can help reduce the portion of the tree that needs to be examined, but not enough to solve this difficulty. For this reason, actual chess programs instead only search a much smaller depth (such as up to 7 moves), at which point they don't have enough information to evaluate the true consequences of the moves and are forced to guess by using heuristic "evaluation functions" that measure simple quantities such as the total number of pieces left.
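The following Python sketch (my own toy illustration, not an excerpt from any chess program) shows the skeleton of a depth-limited game-tree search with alpha-beta pruning and a heuristic evaluation function, applied to a tiny take-away game so that it stays runnable; a real chess engine uses the same structure with a far more elaborate evaluation of positions.

```python
# Depth-limited minimax with alpha-beta pruning on a toy take-away game:
# a player may remove 1-3 stones, and whoever takes the last stone wins.
# The heuristic evaluation stands in for the kind of rough position-scoring
# that chess programs use when they hit their depth limit.


def evaluate(stones):
    """Rough guess at a cut-off node: a pile divisible by 4 is bad for the player to move."""
    return 0 if stones % 4 == 0 else 1


def alphabeta(stones, depth, alpha, beta, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        score = evaluate(stones)                     # heuristic guess at the cut-off
        return score if maximizing else -score
    if maximizing:
        best = float("-inf")
        for take in (1, 2, 3):
            if take <= stones:
                best = max(best, alphabeta(stones - take, depth - 1, alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:                    # beta cutoff: opponent avoids this line
                    break
        return best
    else:
        best = float("inf")
        for take in (1, 2, 3):
            if take <= stones:
                best = min(best, alphabeta(stones - take, depth - 1, alpha, beta, True))
                beta = min(beta, best)
                if alpha >= beta:                    # alpha cutoff
                    break
        return best


if __name__ == "__main__":
    # With 10 stones the side to move can win (take 2, leaving a multiple of 4).
    print(alphabeta(10, depth=12, alpha=float("-inf"), beta=float("inf"), maximizing=True))
```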
- Knapsack Problem
One of the easier NP-complete problems to describe is the knapsack problem. You are given n objects, each with its own weight and value, and you have a knapsack that can only hold so much weight. You want to put objects into the knapsack so as to maximize the total value it carries. The only way we currently know to solve this exactly is to try nearly every combination of the items in order to see which set has the most value; in other words, we need to make nearly every calculation possible in order to find the answer. This takes time exponential in the number of items. Meanwhile, in the decision version of the problem (is there a selection of items with total value at least some target that still fits in the knapsack?), a proposed selection can be checked quickly: just add up its weights and values, which requires looking at each item only once. If P = NP is true, however, this would mean that there is some algorithm that solves the knapsack problem in polynomial rather than exponential time.
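Here is a small Python sketch (my own illustration, not from the cited sources) of the two sides described above: an exhaustive solver that tries every subset of items, taking time roughly 2^n, and a one-pass check of the decision version that verifies whether a proposed selection fits in the knapsack and reaches a target value.

```python
# Knapsack: a brute-force exact solver (exponential in the number of items)
# versus a quick check of a proposed selection for the decision version
# ("is there a selection with weight <= capacity and value >= target?").

from itertools import combinations


def best_knapsack(items, capacity):
    """Try every subset of (weight, value) items; time grows like 2^n."""
    best_value, best_set = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, _ in subset)
            value = sum(v for _, v in subset)
            if weight <= capacity and value > best_value:
                best_value, best_set = value, subset
    return best_value, best_set


def check_selection(selection, capacity, target_value):
    """One pass over the chosen items: does it fit and reach the target value?"""
    weight = sum(w for w, _ in selection)
    value = sum(v for _, v in selection)
    return weight <= capacity and value >= target_value


if __name__ == "__main__":
    items = [(2, 3), (3, 4), (4, 5), (5, 8)]   # (weight, value) pairs
    value, chosen = best_knapsack(items, capacity=7)
    print(value, chosen)                                           # 11 ((2, 3), (5, 8))
    print(check_selection(chosen, capacity=7, target_value=11))    # True
```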
The Future of the P vs. NP Problem
- The Status of the P vs. NP Problem
An approach to the P versus NP problem through algebraic geometry, dubbed Geometric Complexity Theory, has been presented by Ketan Mulmuley and Milind Sohoni. This approach helps avoid the difficulties that current proof techniques run into, but requires deep mathematics that could take many years or decades to carry through. Mulmuley and Sohoni have reduced a question about the nonexistence of polynomial-time algorithms for all NP-complete problems to a question about the existence of a polynomial-time algorithm (with certain properties) for a specific problem. This should give us some hope. However, Mulmuley believes it will take about 100 years to carry out this program, if it works at all. The P versus NP problem continues to inspire and boggle the mind, and continued exploration of this problem will lead us to yet even new complexities in that truly mysterious process we call computation.
- The Possible Uses
Proving P = NP would show that many problems could be computed quickly. Many problems that lie in NP are problems of maximization, such as the knapsack problem. Currently, we must either approximate and get an answer quickly or spend a lot of time to find the answer exactly. Most businesses choose to use an approximation because they cannot afford to wait for the exact solution. If P = NP were true, these businesses could have the best of both worlds: an exact solution, quickly. This would give them more knowledge of how best to use their resources and hopefully make them more efficient. That, in turn, would reduce the amount of money wasted and free that money up to produce more products or conduct more research. Overall, it would have the potential to increase the speed at which a company grows, reduce unnecessary spending, and increase production.
References
1) Cook, Stephen (1971). "The complexity of theorem-proving procedures". Proceedings of the Third Annual ACM Symposium on Theory of Computing, pp. 151–158.
2) Hambrusch, Susanne (2015). CS 381 Lecture - NP and P. Retrieved from notes taken in lecture.
3) Fortnow, Lance (2009). "The Status of the P Versus NP Problem". Communications of the ACM, Vol. 52, No. 9, pp. 78–86.