Experts Share the Top Big Data Trends for 2017

KEY TAKEAWAYS

Many experts believe that 2017 will be even bigger as big data technology becomes increasingly sophisticated and organizations continue to hone their ability to leverage big data.

2016 was a landmark year for big data. According to data collected by Tableau, more organizations than ever before stored, processed and analyzed big data as part of their business processes. Many experts believe that 2017 will be even bigger as big data technology becomes increasingly sophisticated and organizations continue to hone their ability to leverage big data. We asked experts in the field to deliver their predictions about what the year holds when it comes to big data technology. Here’s what they told us.

Increased Automation Within Big Data Tools

Businesses invested in big data need to understand more dimensions of their customers, products and operations. New tools for data dashboards and reporting automation are positioned to turn long- and short-tail insights into revenue and deliver to the bottom line. Automation and lower-cost offshore data science expertise will drive down the cost of adopting business-centric insight tools, helping well-positioned clients personalize and sell more products and services to new and loyal customers alike.

Michael Reddy, Founder & Chief Analytics Officer at Digital Acumen

Increased Focus on Data Cleanliness

Despite increasingly powerful machine learning and advanced algorithms, many marketers still haven’t properly collected, normalized, cleansed and structured their data, or gotten it into a place where it can be analyzed. 2017 will be a year with a bigger focus on these “data janitoring” tasks.

Mike Driscoll, CEO at Metamarkets
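
To make the prediction concrete, here is a minimal sketch of what such data janitoring can look like in practice, using pandas on an invented marketing dataset (the column names and values are hypothetical, not from any source Driscoll describes):

```python
import pandas as pd

# Hypothetical raw marketing data with the usual problems:
# inconsistent casing, stray whitespace, duplicates and missing values.
raw = pd.DataFrame({
    "email": [" Alice@Example.COM ", "bob@example.com", "bob@example.com", None],
    "region": ["us-east", "US-East", "eu-west", "us-east"],
    "spend": ["100.5", "85", None, "42.0"],
})

cleaned = (
    raw
    .dropna(subset=["email"])  # drop rows with no usable key
    .assign(
        email=lambda d: d["email"].str.strip().str.lower(),          # normalize keys
        region=lambda d: d["region"].str.lower(),                    # consistent categories
        spend=lambda d: pd.to_numeric(d["spend"], errors="coerce"),  # enforce numeric types
    )
    .drop_duplicates(subset=["email"])  # dedupe on the normalized key
)

print(cleaned)
```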

Early Adopters Will Begin Using a Single Customer Experience Platform

Early adopters will begin using a single customer experience platform to mine data accrued from all points of engagement. This type of system will include self-service analytics, mobile analytics, and big data analytics. Analytics do provide the insights that brands are looking for, but it’s important to view contact center analytics from a customer, agent, and organizational perspective. Other solutions attempting to manage customer experiences cannot work beyond a single interaction. They’re either trapped in a function (sales, marketing, service) or a channel (voice, mobile, digital, social) or, worse, both: in a channel in a function. These silos are where accountability goes to die. In an era when consumers expect a seamless digital experience, it only takes one interruption to lose customer loyalty.


Merijn te Booij, CMO at Genesys

SaaS and Big Data Will Become Mainstream for IT Operations Analytics

In 2017, expect to see the combination of SaaS and big data move into the mainstream for real-time IT operations analytics solutions.

Big data was born as open source software. While hugely powerful, in most cases it was not easily digested by the bulk of IT shops. Commercialization of big data arose via consulting-driven businesses that offered integration and support around these open source tools. This was effective, but expensive. In parallel, cloud providers started offering big data tools bundled with cloud infrastructure resources. These evolutionary phases set the stage for a natural evolution from tools and generic platforms to SaaS big data offerings built around real-world use cases.

Jim Frey, Vice President of Strategic Alliances at Kentik

Democratization of Big Data Will Accelerate & Level the Playing Field for Smaller Companies

There will be an increased focus on technologies and services that put the power of data into the hands of those who need it most. Sales and marketing, for example, will have more options to monitor, sanitize and analyze big data “at the edge” before it is dumped into large centralized databases where it could lose value quickly. Companies will increasingly look to providers that alleviate the burden of all of this data management, while empowering more effective sales and marketing by delivering good data into the hands of sales reps and marketing leads so they can take more informed and immediate action. This will be especially useful for smaller companies that will leverage democratized data to compete against larger rivals.

Henry Schuck, co-founder and CEO of DiscoverOrg
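
As a rough illustration of sanitizing data “at the edge” before it reaches a centralized store, here is a small Python sketch; the record fields and validation rules are hypothetical, invented for illustration rather than anything DiscoverOrg has described:

```python
from datetime import datetime, timezone
from typing import Optional

REQUIRED_FIELDS = {"lead_id", "email", "captured_at"}  # hypothetical schema

def sanitize_at_edge(record: dict) -> Optional[dict]:
    """Validate and normalize one record before it is shipped to the central store."""
    if not REQUIRED_FIELDS.issubset(record):
        return None  # reject incomplete records at the edge
    email = str(record["email"]).strip().lower()
    if "@" not in email:
        return None  # cheap plausibility check before bad data pollutes the warehouse
    return {
        "lead_id": str(record["lead_id"]),
        "email": email,
        "captured_at": record["captured_at"],
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

incoming_records = [
    {"lead_id": 1, "email": " Pat@Example.com ", "captured_at": "2017-01-05"},
    {"lead_id": 2, "email": "not-an-email", "captured_at": "2017-01-05"},
    {"lead_id": 3, "email": "kim@example.com"},  # missing captured_at
]

# Only records that survive the edge checks get sent downstream.
clean_batch = [r for r in map(sanitize_at_edge, incoming_records) if r is not None]
print(clean_batch)
```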

Companies and Technologies That Create and Manage Big Data Will Face Bigger Expectations

Companies are realizing that verified data is the most critical factor driving successful sales and marketing. As more technology is applied to the sales and marketing function, good data becomes increasingly important, because it is the fuel that powers those tools. The immense value Microsoft placed on LinkedIn because of its rich data, and the investment Salesforce has made in tools that leverage data to inform the “customer journey,” are major signals that foretell future market alliances, consolidation and innovation based primarily on the value of good data.

Henry Schuck, co-founder and CEO of DiscoverOrg

Data Sovereignty and Security Will Drive Discussions at World Forums

One of the biggest questions surrounding big data in 2017 will be: “Who actually owns it?” Data sovereignty and security – both at the corporate and individual levels – will drive discussions about this topic at many prominent forums around the globe (such as the World Economic Forum in Davos and the G7).

As we move into an era of machine learning, artificial intelligence (AI) and virtual reality, the data produced by any one piece of technology belongs to the technology’s “owner/creator.” However, with the European Union implementing frameworks such as the General Data Protection Regulation (GDPR) in 2018, which carries dramatically larger fines for violations of data protection laws (up to 4% of a company’s global revenue in some instances), the financial responsibility for non-compliance is growing dramatically as well.

Now that data sovereignty negligence will hit the deep pockets of corporations directly, my prediction is that it will garner a lot more attention in 2017.

Garry Connolly, Founder & President of Host in Ireland

AI and Analytics Vendor M&A Activity Will Accelerate

There’s no doubt that there’s a massive land grab for anything AI, machine learning or deep learning. Major players ranging from Google, Apple, Salesforce and Microsoft to AOL, Twitter and Amazon drove the acquisition trend this year. Due to the short operating history of most of the startups being acquired, these moves are as much about acquiring the limited number of AI experts on the planet as about the value of what each company has produced to date. The battle lines for enterprise AI mindshare have clearly been drawn between IBM Watson, Salesforce Einstein, and Oracle’s Adaptive Intelligent Applications. What is well understood is that AI needs a consistent foundation of reliable data upon which to operate. With a limited number of startups offering these integrated capabilities, the quest for relevant insights, and ultimately for recommended actions that enable more efficient forecasting and decision-making, will lead to even more aggressive M&A activity in 2017.

Ramon Chen, Chief Marketing Officer at Reltio

Data Lakes Will Finally Become Useful

Many companies that took the data lake plunge in the early days have spent a significant amount of money buying into not only the promise of low-cost storage and processing, but a plethora of services for aggregating significant pools of big data and making them available to be correlated and uncovered for better insights. The challenge has been finding skilled data scientists able to make sense of the information, while also guaranteeing the reliability of the data being aligned and correlated (although noted expert Tom Davenport recently claimed it’s a myth that data scientists are hard to find). Data lakes have also fallen short in providing input into, and receiving real-time updates from, operational applications. Fortunately, the gap is narrowing between what has traditionally been the discipline and set of technologies known as master data management (MDM) and the world of operational applications, analytical data warehouses and data lakes. With existing big data projects recognizing the need for a reliable data foundation, and new projects being combined into a holistic data management strategy, data lakes may finally fulfill their promise in 2017.

Ramon Chen, Chief Marketing Officer at Reltio

Moore’s Law Will Hold True for Databases

Per Moore’s law, CPUs are always getting faster and cheaper. Of late, databases have been following the same pattern.

In 2013, Amazon changed the game when it introduced Redshift, a massively parallel processing (MPP) database that allowed companies to store and analyze all their data for a reasonable price. Since then, however, companies that saw products like Redshift as datastores with effectively limitless capacity have hit a wall. They have hundreds of terabytes or even petabytes of data and are stuck choosing between paying more for the speed they had become accustomed to and waiting five minutes for a query to return.

Enter (or reenter) Moore’s law. Redshift has become the industry standard for cloud MPP databases, and we don’t see that changing anytime soon. That said, our prediction for 2017 is that on-demand MPP databases like Google BigQuery and Snowflake will see a huge uptick in popularity. On-demand databases charge pennies for storage, allowing companies to store data without worrying about cost. When users want to run queries or pull data, they spin up the hardware they need and get the job done in seconds. They’re fast and scalable, and we expect to see a lot of companies using them in 2017.

Lloyd Tabb, Founder, Chairman & Chief Technology Officer at Looker
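
To illustrate the on-demand model Tabb describes, here is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset and table names are hypothetical, and it assumes credentials are already configured in the environment:

```python
from google.cloud import bigquery

# The client picks up the project and credentials from the environment.
client = bigquery.Client()

# Storage is cheap and always on; compute is provisioned per query.
# You pay (roughly) for the bytes the query scans, not for idle hardware.
query = """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `my_project.sales.orders`  -- hypothetical table
    WHERE order_date >= '2017-01-01'
    GROUP BY region
    ORDER BY revenue DESC
"""

for row in client.query(query).result():
    print(row.region, row.orders, row.revenue)
```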

SQL Will Have Another Extraordinary Year

SQL has been around for decades, but from the mid-2000s onward it fell out of style as people started exploring NoSQL and Hadoop alternatives. SQL, however, has come back with a vengeance. The renaissance of SQL has been beautiful to behold, and I don’t even think it’s near its peak yet.

Lloyd Tabb, Founder, Chairman & Chief Technology Officer at Looker

IT Teams Will Direct More Focus Toward Putting Big Data to Use

In 2017, IT teams will look beyond simply solving for big data and, as the next step, focus more of their attention on putting big data to use. Machine learning will be used as a source of broad intelligence and insights that were not humanly possible before. Combined with customer feedback, IT teams will utilize insights gathered from machine learning to predict and personalize customer experiences.

Rajagopal Chandramohan, Chief Architect, Enterprise Business Services at Intuit
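
As a toy sketch of the kind of model this might involve, here is a scikit-learn classifier that predicts which experience variant to serve a customer; the features, labels and values are invented purely for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-customer features: [visits_last_30d, support_tickets, avg_order_value]
X = [
    [12, 0, 80.0],
    [2, 3, 15.0],
    [8, 1, 45.0],
    [1, 5, 10.0],
]
# Hypothetical labels: which experience variant each customer responded to best.
y = ["self_service", "assisted", "self_service", "assisted"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict a personalized experience for a new customer.
print(model.predict([[5, 2, 30.0]]))
```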

More Companies Will Use Big Data Analytics to Detect (Not Just Prevent) Fraud

Many IT managers are unaware that enterprise resource planning (ERP) systems, which house and manage the company’s big data sets, have inherent complexities that can actually create opportunities for fraud. Considering the expense of overhauling such systems, companies are beginning to focus more on fraud detection, installing data analysis tools that double-check the ERP’s controls, with the intention of catching anomalies that may indicate fraud.

While adding ERP controls may prevent additional fraud, it’s costly and often chokes process efficiency while opening the door to circumvention by determined fraudsters. By shifting their focus, companies can gain the ability to analyze trends in ERP data and detect where someone has erred or attempted to bypass controls, rather than setting up endless roadblocks.

Dan Zitting, chief product officer at ACL
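
A minimal sketch of detection rather than prevention: flag statistically unusual ERP transactions for human review instead of adding more blocking controls. The transaction features and contamination rate below are hypothetical, using scikit-learn’s IsolationForest as one possible anomaly detector:

```python
from sklearn.ensemble import IsolationForest

# Hypothetical ERP transactions: [amount, hour_of_day, days_since_vendor_added]
transactions = [
    [120.0, 10, 400],
    [95.0, 11, 380],
    [110.0, 14, 500],
    [99.0, 9, 420],
    [9800.0, 3, 2],  # large amount, off-hours, brand-new vendor
]

detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(transactions)  # -1 marks suspected anomalies

for tx, label in zip(transactions, labels):
    if label == -1:
        print("Review:", tx)  # route to an auditor rather than blocking the process
```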

2017 Will Be a Year of Optimization for Organizations With Cloud-Based Data Stores

For organizations with cloud-based data stores, 2017 will be a year of optimization. For those seeking to move data to the cloud, 2017 will be a year to incorporate data optimization strategies. All roads lead to eliminating unnecessary operational costs while fostering business performance with access to insights and facts. Stories of data boomerangs – data moved to the cloud and then moved back due to unexpected costs – can be eliminated by following a zoned data strategy. This entails supporting data architectures with the premise that not all data is of equal value to the organization. Data architects must consider data value based on organizational need. Alignment can be achieved with data zones; common examples include lights-on support, competitive advantage, and innovation and refinement. Gone are the days when data lakes can be viewed as a simple and undifferentiated refuge for all data. Get in the zone, the data zone.

William Hurley, senior director of software lifecycle services at Astadia
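
One way to read the zoned-data idea as code is the small Python sketch below, which assigns datasets to storage tiers based on business value and access frequency; the zone names, fields and rules are illustrative assumptions, not Astadia’s methodology:

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    business_value: str  # "lights_on", "competitive_advantage" or "innovation"
    reads_per_day: int

def assign_zone(ds: Dataset) -> str:
    """Map a dataset to a storage zone; the thresholds here are hypothetical."""
    if ds.business_value == "lights_on" and ds.reads_per_day > 100:
        return "hot"   # fast, expensive storage for operational must-haves
    if ds.business_value == "competitive_advantage":
        return "warm"  # balanced cost/latency for routine analytics
    return "cold"      # cheap archival for exploratory and innovation data

for ds in [
    Dataset("billing_events", "lights_on", 5000),
    Dataset("campaign_models", "competitive_advantage", 40),
    Dataset("clickstream_archive", "innovation", 2),
]:
    print(ds.name, "->", assign_zone(ds))
```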

Techopedia Staff
Editorial Team

At Techopedia, we aim to provide insight and inspiration to IT professionals, technology decision-makers and anyone else who is proud to be called a geek. From defining complex tech jargon in our dictionary, to exploring the latest trend in our articles or providing in-depth coverage of a topic in our tutorials, our goal is to help you better understand technology - and, we hope, make better decisions as a result.