Finite state machines (FSMs) are computational models defined by a finite set of distinct states, of which exactly one can be active at a time. In a nutshell, FSMs are simple but elegant tools for building AI: the machine is always in exactly one state, and it can only switch to another state through a transition triggered by an input. The most traditional example is a traffic light, which transitions from green to yellow, and from yellow to red, after a defined amount of time. In this case the input is time, but no real AI is involved since the device is completely passive; only if the traffic light could react to passersby would AI come into play.
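The traffic light can be sketched as a minimal FSM in a few lines of Python. This is an illustrative sketch, not a standard library feature: the states, the transition table, and the `step` function are all assumptions chosen for the example, with the timer tick acting as the only input.

```python
from enum import Enum, auto

class Light(Enum):
    GREEN = auto()
    YELLOW = auto()
    RED = auto()

# Transition table: each state maps to the next state
# when the "timer elapsed" input arrives.
TRANSITIONS = {
    Light.GREEN: Light.YELLOW,
    Light.YELLOW: Light.RED,
    Light.RED: Light.GREEN,
}

def step(state: Light) -> Light:
    """Advance the machine by one timer tick."""
    return TRANSITIONS[state]

state = Light.GREEN
state = step(state)  # GREEN -> YELLOW
state = step(state)  # YELLOW -> RED
```

Note that the machine is completely passive, as the text observes: it reacts only to the timer input and never to its environment.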
FSMs are broadly used in the video game industry for their inherent simplicity and predictability to support basic but functional AI. For example, they are widely used in action and RPG games to drive non-playable characters (NPCs). A relatively simple AI model is built so that a given NPC (usually a foe) can only select one particular behavior at a time, such as attack, flee, defend, or detect. They can also be used for main characters, for example when the player gets a power-up or bonus, or to model UI and control schemes in platforming games (e.g., tracking a crouched state or rapid-fire mode).
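An NPC driven by such a model might look like the following sketch. The state names, the events, and the transition rules are all hypothetical examples invented for illustration; a real game would wire transitions to its own perception and combat systems.

```python
from enum import Enum, auto

class NPCState(Enum):
    PATROL = auto()
    ATTACK = auto()
    FLEE = auto()

# Hypothetical transition rules keyed by (current state, event).
TRANSITIONS = {
    (NPCState.PATROL, "player_spotted"): NPCState.ATTACK,
    (NPCState.ATTACK, "low_health"): NPCState.FLEE,
    (NPCState.ATTACK, "player_lost"): NPCState.PATROL,
    (NPCState.FLEE, "healed"): NPCState.PATROL,
}

class NPC:
    def __init__(self):
        self.state = NPCState.PATROL

    def handle(self, event: str) -> NPCState:
        # Events with no transition from the current state are ignored,
        # which keeps the NPC's behavior predictable.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Because exactly one behavior is active at any time, the NPC can never, say, attack and flee simultaneously, which is precisely the predictability the genre relies on.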
FSMs can also be used to create realistic simulations of software architectures and communication protocols for cybersecurity purposes. FSM models of vulnerable operations are generated to enumerate all possible exploits and to let an AI search for the best mitigations. These simulations are used to test and evaluate security protocols, their robustness, and the overall security posture of a system, and the results can later inform cybersecurity policies and best practices.
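A toy version of such a protocol model is sketched below, loosely inspired by a TCP-style handshake. The states and message names are simplified assumptions for illustration: any message that has no valid transition from the current state drives the model into an error state, which is exactly the kind of out-of-order sequence a security analysis would probe for.

```python
from enum import Enum, auto

class Conn(Enum):
    CLOSED = auto()
    SYN_RECEIVED = auto()
    ESTABLISHED = auto()
    ERROR = auto()

# Only these (state, message) pairs are legal in the simplified protocol.
VALID = {
    (Conn.CLOSED, "SYN"): Conn.SYN_RECEIVED,
    (Conn.SYN_RECEIVED, "ACK"): Conn.ESTABLISHED,
    (Conn.ESTABLISHED, "FIN"): Conn.CLOSED,
}

def run(messages):
    """Feed a message sequence through the model; any unexpected
    message sends the machine to ERROR, flagging a bad sequence."""
    state = Conn.CLOSED
    for msg in messages:
        state = VALID.get((state, msg), Conn.ERROR)
    return state

run(["SYN", "ACK"])  # a well-formed handshake reaches ESTABLISHED
run(["ACK"])         # an ACK before any SYN is rejected as ERROR
```

Because every illegal sequence maps to a single explicit error state, a tester can systematically generate message orderings and check which ones the protocol model accepts.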
FSMs have also been used in the field of computational linguistics to build natural language processing (NLP) tools and chatbots, with mixed results. Natural human language is full of contextual ambiguities that other humans easily resolve during real-life conversations (or even while reading a text). FSMs parse language with a deterministic approach that is often too rigid to handle natural conversations properly, so statistical inference and decision-theoretic methods are usually preferred. FSMs have nevertheless served in the past as the foundation of simple but efficient NLP systems. In software and applications where dialogs are hard-coded in the source code of a particular programming language, however, FSMs can still be used efficiently enough.
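A hard-coded dialog of that kind can be modeled as an FSM whose states are dialog nodes and whose inputs are the player's (or user's) chosen replies. The node names and choices below are invented for the example; the point is only that a fixed dialog tree fits the deterministic FSM mold well, precisely because no ambiguity needs to be resolved.

```python
# Hypothetical hard-coded dialog tree as an FSM:
# each node maps a user choice to the next node.
DIALOG = {
    "greeting": {
        "ask_quest": "quest_offer",
        "say_bye": "end",
    },
    "quest_offer": {
        "accept": "end",
        "decline": "greeting",
    },
}

def next_node(node: str, choice: str) -> str:
    # Unrecognized choices keep the conversation at the current node,
    # so the dialog never enters an undefined state.
    return DIALOG.get(node, {}).get(choice, node)
```

Since every reachable exchange is enumerated up front, this deterministic approach works; it is open-ended natural conversation, as the text notes, that breaks it.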