The philosophy of big data centers on a targeted, carefully managed use of large pools of information. For example, a company with many thousands of customers might undertake a big data project to harness all of the information it holds about those customers: their names, where they live, what they have bought before, and so on. However, the results have more to do with setting up specific structures for data manipulation and reporting than with simply collecting and "running" these massive data sets.
Part of the challenge of big data is that it requires more specialized infrastructure and processes. Companies often use open-source systems such as Apache Hadoop, along with related tools like MapReduce, to put big data solutions in play. This takes technical know-how beyond setting up a Microsoft Access table or pursuing some other simpler database technology.
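To make the MapReduce idea mentioned above concrete, here is a minimal sketch of its two phases in plain Python. This is illustrative only: production Hadoop jobs are typically written against the Java API and run distributed across a cluster, while this version runs in a single process. The sample records and the word-count task are assumptions chosen for simplicity.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, 1) pair for every word in every record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle/reduce: group the emitted pairs by key and sum the counts.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# Hypothetical input records for illustration.
records = ["big data", "Big Data tools", "data pipelines"]
word_counts = reduce_phase(map_phase(records))
```

In a real cluster, the map phase runs in parallel on many nodes, the framework shuffles pairs by key, and the reduce phase aggregates each key's values; the single-process sketch preserves that structure without the distribution.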
To make big data effective, companies have to look at implementation and at how to avoid disrupting their normal business activities. To make it efficient, they have to determine exactly which sets of data will be most useful to them. For example, if salespeople can do what they need to do with a simple report of just last names, states and telephone numbers, it doesn't make sense to run more extensive data through the system or to collect and present additional identifiers and other pieces of information.
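The idea of reporting only the fields a team actually needs can be sketched as a simple projection over customer records. The field names and sample data below are hypothetical, chosen to mirror the last-name/state/phone example above.

```python
# Hypothetical customer records; field names are assumptions for illustration.
customers = [
    {"last_name": "Rivera", "state": "TX", "phone": "555-0101",
     "purchase_history": ["widget", "gadget"], "loyalty_score": 87},
    {"last_name": "Chen", "state": "CA", "phone": "555-0102",
     "purchase_history": ["widget"], "loyalty_score": 64},
]

# Only the fields the sales report actually needs.
NEEDED_FIELDS = ("last_name", "state", "phone")

def project(records, fields):
    # Drop every field not in the requested list.
    return [{f: r[f] for f in fields} for r in records]

report = project(customers, NEEDED_FIELDS)
```

Limiting the report to a few fields keeps the pipeline smaller and cheaper; extra identifiers are pulled in only when a task genuinely requires them.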
Effectiveness, ease of implementation and cost drive the emergence of company-specific big data solutions. These solutions depend heavily on a particular business model and on the problems that have to be solved.
"Being digital should be of more interest than being electronic." - Alan Turing, 1947