In an age of global cloud computing, software-defined hyperscale networks, big data, and automated, distributed digital enterprises built on the Internet of Things (IoT), operations managers struggle to wrap their heads around the interdependent dynamics of these labyrinths in two dimensions.
They crave a heuristic view of these intersecting systems, one that helps them see relationships in the data, the root causes of problems, the trade-offs between decisions, and the impact of remedies.
What they do not want is a view obscured by an impenetrable fog of amorphous data; they also want to control live operations in order to streamline them.
Currently, IT decision-makers make do with textual reports, interspersed with rudimentary charts and graphs drawn in spreadsheets, to gain situational awareness of their operations and to weigh options for optimization.
Beyond Artificial Intelligence
Artificial intelligence (AI) is meant to comb through reams of data and point toward solutions. In most cases, however, the algorithms are black boxes: they crunch data and derive coefficients of mathematical functions that quantify the impact of performance variables. That output is not intuitive in the ambiguous world of decision-makers, who need to see visuals of live reality to gain insights.
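To make the point concrete, here is a minimal sketch of the kind of output such an algorithm produces: a least-squares fit of latency against server load, yielding one coefficient per performance variable. The data and variable names are illustrative, not drawn from any system mentioned in the article; the precision of the resulting number contrasts with how little intuition it conveys on its own.

```python
# Illustrative black-box output: fit a one-variable linear model
# y = slope * x + intercept by least squares.
# x: server load (%), y: response latency (ms); all figures are made up.
xs = [20, 40, 60, 80]
ys = [105, 210, 300, 405]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Classic closed-form least-squares estimates for one predictor.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

print(f"latency = {slope:.2f} * load + {intercept:.2f}")
# -> latency = 4.95 * load + 7.50
```

A coefficient like "4.95 ms per percentage point of load" is exact but abstract, which is precisely why the article argues for visual, interactive presentations of the same relationships.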
Interactive 3D graphical displays address this need. They present the critical performance parameters, sifted from a mass of data with help from machine learning (ML), on a single virtual interface where decision-makers can explore, simulate, and predict their way to conclusions.
Mixed reality, which blends elements of virtual reality and augmented reality, together with natural user interaction such as hand gestures, gives users the flexibility to view data from multiple angles and assimilate information quickly.
Studies report that task performance rises by more than 50% with a 3D stereoscopic view of data, which enhances the perception of spatial dimensions. Gesture interaction adds fluidity to the manipulation of images, allowing users to probe them from multiple directions.
Wearable devices support rapid interplay with graphics as they have more degrees of freedom to view images from several vantage points. The graphics are also vivid due to the proximity of wearables.
However, users of wearable devices continue to experience a blur as their field of view changes, which is detrimental to task performance.
Uses of Immersive Graphical Display
Immersive interaction with graphics is finding use cases where the complexity is mind-boggling and existing methods of visualization are unable to convey the multiple dimensions of reality. “We have found that the primary use cases for 3D interactive graphics are around networks, supply chains, and cybersecurity. Network visuals, a key value of the graphics, display the connections of the nodes juxtaposed with data about their relationships and traffic flows,” said Tyler Cummings, co-founder and COO of 3Data.
Today, the graphics can be reconfigured to magnify the specific aspects that interest an individual.
“We have developed an AI-assisted routine that allows users to choose a target of interest, and the ‘algorithm’ recreates the visuals centered around it, draws attention to noteworthy elements in plain English, and pinpoints the most significant variables for the problem at hand,” said Ciro Donalek, CTO and co-founder of Virtualitics, Inc., which has deployed the solution for several Fortune 500 companies.
“People who are not data scientists can take advantage of complex algorithms with natural interactions like hand gestures to see graphics as they choose,” Donalek added.
3Data uses web technologies to create a virtual platform that allows geographically dispersed teams to join the same virtual meeting room to collaborate and make decisions, regardless of their data sources, visualization tools, or communication devices.
“It would take multiple monitors to display the same information on screens,” Cummings explained.
“Each team can invoke an API to bring a data source into the platform and create a graph on a dashboard,” said Cummings. He demonstrated a network operations center that 3Data created for Cisco with a network diagram, displayed as a hive of network nodes and connections, in the foreground, and dashboards with data charts in the background.
“For cybersecurity monitoring, we use the IBM Watson platform for machine learning to look for anomalies in the data flows and uncover attacks, such as DDoS (Distributed Denial of Service), which are illustrated by the disparate sizes of individual network nodes,” said Cummings.
The databases can be queried with voice commands to show, for example, the volume of data flowing through individual ports. “Hand gestures on the network diagram can shut down congested physical ports, and the graph will show the subsequent impact on overall traffic in the network,” said Cummings.
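The port-shutdown interaction Cummings describes amounts to a what-if computation behind the gesture: remove a port and show how its traffic redistributes. The sketch below assumes, purely for illustration, that traffic spreads evenly across the remaining ports; real rerouting depends on topology and routing policy, and all names and figures are hypothetical.

```python
# Hypothetical per-port traffic in Gbps; names and figures are illustrative.
ports = {"eth0": 9.4, "eth1": 2.1, "eth2": 3.0, "eth3": 1.5}
CAPACITY = 10.0  # assumed per-port capacity in Gbps

def shut_down(ports, port):
    """Remove a port and redistribute its traffic evenly across the rest."""
    remaining = {p: v for p, v in ports.items() if p != port}
    share = ports[port] / len(remaining)
    return {p: v + share for p, v in remaining.items()}

after = shut_down(ports, "eth0")  # gesture: close the congested port
for p, v in sorted(after.items()):
    status = "OVER CAPACITY" if v > CAPACITY else "ok"
    print(f"{p}: {v:.2f} Gbps ({status})")
```

The graph update the quote mentions is simply this recomputed dictionary rendered back onto the network diagram, so the operator sees the downstream effect of the gesture immediately.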
Map data augment supply chain diagrams that show the links between suppliers and their locations. “In emergencies such as a hurricane, decision-makers can examine what-if scenarios such as changes in the sourcing of products and see, on the virtual diagram, their impacts on the expected time-of-arrival,” said Cummings.
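The hurricane scenario reduces to a small what-if query: mark a supplier unavailable and recompute the expected time of arrival from the remaining options. The supplier names, lead times, and selection rule below are all illustrative assumptions, not details of any system described in the article.

```python
# Hypothetical supplier lead times in days; all names and figures are illustrative.
suppliers = {
    "gulf-coast": {"lead_time_days": 4, "available": False},  # hurricane outage
    "midwest": {"lead_time_days": 7, "available": True},
    "overseas": {"lead_time_days": 21, "available": True},
}

def best_eta(suppliers):
    """Pick the fastest available supplier and return (name, ETA in days)."""
    options = {n: s["lead_time_days"] for n, s in suppliers.items() if s["available"]}
    name = min(options, key=options.get)
    return name, options[name]

name, eta = best_eta(suppliers)
print(f"Re-source from {name}: expected arrival in {eta} days")
# -> Re-source from midwest: expected arrival in 7 days
```

On the virtual map, the same result appears as a redrawn link between supplier and destination with the new ETA attached, which is what lets decision-makers compare sourcing options at a glance.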
The Maturity of Wearable Devices
Wearable devices have been evolving rapidly; recently launched mixed reality devices such as the Magic Leap One and Microsoft's HoloLens 2 add features for natural interaction in 3D environments and for collaboration across geographies.
Industries such as construction, which have lagged in productivity growth, expect to benefit greatly from mixed reality wearables that help them collaborate. Bentley and Trimble, two major technology providers to the construction industry, are using mixed reality to ensure that construction progress accords with architectural designs.
Currently, wearable devices play a limited role in specific circumstances, and natural user interaction is restricted to a few clunky controller-based options.
“Immersive graphical representation on wearable devices is most useful in the context of geospatial data, such as airports, to see what the naked eye cannot, though without cluttering the view with data the audience cannot easily digest,” said Andrea Bravo, Ph.D. Fellow at the Engineering Systems Group, Technical University of Denmark, based on her research.
“The fidelity of the data and text juxtaposed on immersive graphics, their clarity, and the speed at which they refresh need to advance before they are widely used,” said Wallon Walusayi, CEO and co-founder of 3Data.
To be sure, Bravo told us that she has not yet completed her exploration with advanced devices such as the HoloLens 2, which improves the field of view and supports natural interaction through hand gestures and eye gaze.
What We've Learned
Technologies for immersive graphical displays of information are rising from the trough of the hype cycle. New products are emerging to reignite the interest of customers. Incorporation of mixed reality and natural user interaction will considerably increase the value of these products for decision-makers.
The launch of new wearable devices will expand the scope for collaboration and decision-making with, as yet, unforeseen outcomes.