On the face of it, deep stubborn networks simply "add functionality" to an existing technological construct, the generative adversarial network (GAN). In reality, the recent evolution of the deep stubborn network tells us something fundamental about how AI can evolve toward meaningful modeling of human decision-making.
The deep stubborn network relies on the interplay within the GAN of two AI "entities": the "generator" and the "discriminator." The generator produces content, synthetic samples meant to resemble real training data. The discriminator takes each sample as input and judges it, deciding whether it looks real or generated. These two parts of a deep stubborn network are treated as independent entities for the purposes of AI research, but they work together.
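There is no public reference implementation of a deep stubborn network, but the generator/discriminator pairing it builds on is the standard GAN structure. The following is a minimal sketch in PyTorch; the layer sizes, noise dimension, and toy two-dimensional samples are assumptions chosen purely for illustration.

```python
import torch
import torch.nn as nn

# Generator: maps random noise vectors to synthetic samples.
# The noise size (16) and sample size (2) are arbitrary toy choices.
generator = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

# Discriminator: maps a sample to a single score interpreted as
# the probability that the sample came from real data.
discriminator = nn.Sequential(
    nn.Linear(2, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),
)

noise = torch.randn(8, 16)            # a batch of 8 random noise vectors
fake_samples = generator(noise)       # the "generated" content
scores = discriminator(fake_samples)  # the discriminator's real-vs-fake judgments
```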
It's important to note that the available public literature on deep stubborn networks is scant, seeming to consist of a small set of common descriptions on top-ranking Google pages. One of the most authoritative, at KDnuggets, cites the use of a "Goodfellow coefficient," a term that cannot be found on its own through a Google search. (Ian Goodfellow is the computer scientist who introduced the generative adversarial network, the construct underlying deep stubborn networks.)
Still, the basic idea of the deep stubborn network is explained at KDnuggets and elsewhere: the generator "tries to trick" the discriminator, and the discriminator is made progressively "more discriminative" until, in a kind of machine "self-doubt," it declines to return results at all. Then an important next step occurs: the program, either through human intervention or additional algorithms, is "coaxed" into providing an answer.
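The public write-ups don't specify how the "self-doubt" or the "coaxing" is implemented. Continuing the sketch above, the code below shows the standard GAN adversarial training step in PyTorch, followed by one possible reading of the refusal behavior as an abstention band on the discriminator's confidence; the threshold and the "coax" step are assumptions for illustration, not a documented mechanism.

```python
import torch
import torch.nn.functional as F

# Assumes the `generator` and `discriminator` modules from the earlier sketch.
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real_batch = torch.randn(8, 2)  # stand-in for a batch of real training data

# --- Discriminator step: learn to tell real samples from generated ones ---
fake_batch = generator(torch.randn(8, 16)).detach()
d_loss = (F.binary_cross_entropy(discriminator(real_batch), torch.ones(8, 1))
          + F.binary_cross_entropy(discriminator(fake_batch), torch.zeros(8, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# --- Generator step: "try to trick" the discriminator into scoring fakes as real ---
fake_batch = generator(torch.randn(8, 16))
g_loss = F.binary_cross_entropy(discriminator(fake_batch), torch.ones(8, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()

# --- Hypothetical "stubborn" behavior: abstain when uncertain ---
# If the discriminator's score sits near 0.5 it is effectively unsure;
# refuse to answer, then "coax" a verdict by narrowing the abstention band.
def verdict(sample, band=0.15):
    score = discriminator(sample).item()
    if abs(score - 0.5) < band:
        return None          # declines to return a result
    return score > 0.5

answer = verdict(real_batch[:1])
if answer is None:
    answer = verdict(real_batch[:1], band=0.0)  # the "coaxed" answer
```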
In this model, we start to see AI taking an enormous step, from simply modeling data or parsing training sets to actually making the kinds of high-level decisions that we think of as belonging to the human domain. In comparing the "choice" patterns of the AI discriminator with the "choice" patterns of a human, the KDnuggets piece cites the "paradox of choice" popularized by psychologist Barry Schwartz. Some independent blog posts describe how the deep stubborn network highlights essentially human behaviors: J. Yakov Stern expounds on the current limitations and possible progress in a lengthy screed on IVR, and Alexia Jolicoeur-Martineau reveals some of the recent results GANs can produce.
So in a sense, the primary impact of deep stubborn networks on AI is to re-orient or expand research beyond the kinds of decision-making that are easily applicable to the enterprise, and toward groundbreaking work on making computers even more like humans. There could be any number of enterprise applications of this idea, but they are not as cut-and-dried as, say, the current use of machine learning algorithms in consumer recommendation engines, or of smart ML processes in marketing. Deep stubborn network (DSN) research seems to suggest that we can make AI entities more sentient, which carries with it a good deal of risk as well as reward.