Traditionally, the use case for Hadoop is when you need to split your data storage across dozens of machines or more and a traditional RDBMS solution is not in play. With only one machine, you are likely to negate any gains Hadoop might otherwise provide.
Additionally, 20 columns × 5 million rows is considered a small database by most DBAs; beyond adding indexes for your lookups, it is not worth much optimization effort, because most DBMSs will handle that amount of data quickly.
Back to the topic of Hadoop, though: Hadoop is built around a distributed file system (HDFS) and a processing framework, not an outright database. A potential use (and one I know fairly well) is when you have large sets of binary files that share a common data format, and you need to run the same operation on each file, or you need to locate particular files quickly. In that case, Hadoop effectively acts as a massive lookup engine over all the files on the DFS, letting you quickly find the files you need and run the data analysis on them in parallel. One group using Hadoop this way is CERN.
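To make the "same operation on every file" pattern concrete, here is a minimal sketch of a map-only MapReduce job. It assumes the binary files have already been packed into a Hadoop SequenceFile keyed by file name (a common trick for many small files); the class names (`BinaryScan`, `ScanMapper`) and the size-recording "operation" are placeholders, not anything specific to CERN's setup.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BinaryScan {

    // Each map() call sees one file: its name as the key, its raw bytes as the value.
    public static class ScanMapper
            extends Mapper<Text, BytesWritable, Text, LongWritable> {

        @Override
        protected void map(Text fileName, BytesWritable contents, Context context)
                throws IOException, InterruptedException {
            // Stand-in for the real per-file operation: just record the file's size.
            context.write(fileName, new LongWritable(contents.getLength()));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "binary-scan");
        job.setJarByClass(BinaryScan.class);

        // Map-only job: every file is processed independently, in parallel.
        job.setMapperClass(ScanMapper.class);
        job.setNumReduceTasks(0);

        // Input is a SequenceFile of (file name, file bytes) pairs on HDFS.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The point of the sketch is the shape of the workload: the data lives on the DFS, and the framework ships the same small piece of code out to wherever each file is stored, rather than pulling rows through a query engine the way an RDBMS would.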
I would not encourage you to consider transitioning your data to Hadoop when a traditional RDBMS would work well for your needs.