Within the broader big data ecosystem, applications of big data strategies are specific to the needs of a particular business or organization. One of the biggest mistakes executives and other professionals make is taking a generic approach to big data and trying to fit systems into a template that has been used before.
The philosophy of big data centers on a targeted, carefully managed use of large pools of information. For example, a company with many thousands of customers might undertake a big data project to harness all of the information it holds about them – their names, where they live, what they have bought before and so on. The results, however, have more to do with setting up specific structures for data manipulation and reporting than with simply collecting and "running" these massive data sets.
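To make that distinction concrete, here is a minimal sketch in Python of the difference between merely collecting customer records and structuring them for reporting. The records and field names are hypothetical, chosen only for illustration.

```python
# A minimal sketch of the difference between collecting customer data and
# structuring it for reporting. Records and field names are hypothetical.
from collections import defaultdict

# "Collected" raw records, as they might arrive from sales systems.
customers = [
    {"name": "Ada Lee", "city": "Austin", "last_purchase": "laptop"},
    {"name": "Raj Patel", "city": "Austin", "last_purchase": "monitor"},
    {"name": "Mia Chen", "city": "Denver", "last_purchase": "laptop"},
]

# A purpose-built reporting structure: customers grouped by city, ready
# for a targeted question like "who do we have in each market?"
by_city = defaultdict(list)
for record in customers:
    by_city[record["city"]].append(record["name"])

for city, names in sorted(by_city.items()):
    print(f"{city}: {', '.join(names)}")
```

The raw list answers nothing by itself; the grouped structure answers one specific business question, which is the shape most useful big data work takes.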
Part of the challenge of big data is that it requires more specialized infrastructure and processes. Companies often use open-source frameworks like Apache Hadoop, along with its MapReduce programming model, to get big data solutions into play. This takes additional technical know-how beyond setting up a Microsoft Access table or pursuing some other simpler database technology.
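As an illustration of that extra know-how, the sketch below shows the shape of a MapReduce job written for Hadoop Streaming, which lets plain scripts act as the map and reduce phases. It counts purchases per state; the tab-delimited input layout (customer ID, state, amount) is an assumption for the example, not something prescribed by Hadoop.

```python
#!/usr/bin/env python3
# Sketch of a Hadoop Streaming job that counts purchases per state.
# Assumes each input line is "customer_id<TAB>state<TAB>amount";
# that layout is hypothetical, chosen only for this example.
import sys

def mapper():
    # Map phase: emit "state<TAB>1" for every purchase record.
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        if len(parts) == 3:
            _, state, _ = parts
            print(f"{state}\t1")

def reducer():
    # Reduce phase: Hadoop sorts mapper output by key, so identical
    # states arrive consecutively; sum the counts for each one.
    current_state, count = None, 0
    for line in sys.stdin:
        state, value = line.rstrip("\n").split("\t")
        if state != current_state:
            if current_state is not None:
                print(f"{current_state}\t{count}")
            current_state, count = state, 0
        count += int(value)
    if current_state is not None:
        print(f"{current_state}\t{count}")

if __name__ == "__main__":
    # Run as "script.py map" for the map phase, "script.py reduce" for reduce.
    mapper() if sys.argv[1] == "map" else reducer()
```

In practice a job like this would be submitted through Hadoop's streaming jar, with the script passed as the -mapper and -reducer commands. Even this simple aggregation involves coordination machinery well beyond a desktop database query, which is exactly the specialized skill set the paragraph above describes.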
To make big data effective, companies have to plan implementation so that it does not disrupt their normal business activities. To make it efficient, they have to identify exactly which sets of data will be most useful to them. For example, if salespeople or others can do what they need to do with a simple report of just last names, states and telephone numbers, it does not make sense to run more extensive data through the system or to collect and present other identifiers and pieces of information.
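As a rough sketch of that idea, the following Python trims a customer extract down to only the three fields such a sales report actually uses. The CSV layout and column names here are assumptions made for the example.

```python
# Sketch: reduce a wide customer extract to only the fields a sales team
# needs. The source file and its column names are hypothetical.
import csv

NEEDED_FIELDS = ["last_name", "state", "phone"]

def slim_report(source_path, report_path):
    with open(source_path, newline="") as src, \
         open(report_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=NEEDED_FIELDS)
        writer.writeheader()
        for row in reader:
            # Drop every column except the three the report actually uses.
            writer.writerow({field: row[field] for field in NEEDED_FIELDS})

# Example usage, with hypothetical file names:
# slim_report("customers_full.csv", "sales_report.csv")
```

Deciding which columns never enter the pipeline at all is often where the real efficiency gains come from, since every extra field carries storage, processing and privacy costs.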
Effectiveness, ease of implementation and cost drive the emergence of company-specific big data solutions. These innovations depend heavily on a particular business model and on the problems that have to be solved.