What is real? How do you define 'real'? If you're talking about what you can feel, what you can smell, what you can taste and see, then 'real' is simply electrical signals interpreted by your brain.
Morpheus, “The Matrix”
When I first discovered statecharts, they opened up an opportunity I had never considered: a way to tame the cognitive load induced by complex logic by letting me visualize that logic with diagrams.
This simple change in how I approach code design was profound.
I soon discovered that there are several interpretations of how to implement statecharts in code, and I want to explain my own, especially after some of the over-complicated uses I've seen.
Simply put, for me, a state machine is designed to become the brain or nerve center of a piece of business logic. By brain I mean the metaphor: the containing framework/library/code that uses the state machine is the body, performing tasks and sending and receiving signals through the nervous system to the brain, which doesn't actually control anything other than sending and receiving signals.
This analogy drives my usage of XState in my projects. I design the logic visually, then create an XState module for that same logic. Then I connect the machine to whatever framework/library I'm using.
What this looks like in practice is that my machines have many named actions/guards/services. Those services are not implemented until the hook-up; I can define conditions and side effects during the visualization process. This also means the context remains small (non-complex data, primitive values) and the logic for the context remains with the machine.
One of the features of XState I use is that machines can be created with a partial configuration and filled in afterwards by the owner.
What I've found in doing this is that my machines remain reasonably maintainable; feedback from others has been that my machines were at least understandable. In cases where the roles of ownership are confused and the presentation mixes with the business logic, most of the feedback I hear is that the result is over-engineered, confusing, and impossible to maintain.
How about some examples? My first machine was a traffic light, with five states: solid red, blinking red, solid yellow, blinking yellow, and green.
const lightMachine = createMachine({
  initial: 'solid',
  states: {
    solid: {
      initial: 'red',
      on: {
        FAIL: '#blinking.red',
        CAUTION: '#blinking.yellow',
      },
      states: {
        red: {
          entry: 'setColorRed',
          on: { NEXT: 'yellow' },
        },
        yellow: {
          entry: 'setColorYellow',
          on: { NEXT: 'green' },
        },
        green: {
          entry: 'setColorGreen',
          on: { NEXT: 'red' },
        },
      },
    },
    blinking: {
      id: 'blinking',
      invoke: { src: 'blinkerRelay' },
      on: { NEXT: 'solid' },
      states: {
        red: {
          entry: 'setColorRed',
          on: { CAUTION: 'yellow' },
        },
        yellow: {
          entry: 'setColorYellow',
          on: { FAIL: 'red' },
        },
      },
    },
  },
});
Notice that this is just the description of the machine. Because it is detached from any implementation, it affords a great deal of flexibility. In fact, it is very easy to diagram this configuration; try copying and pasting it into the stately.ai Visualizer.
It also isn't too difficult to imagine this being modeled in a different language, considering it is just names.
The essence is that the brain, which can be described in many different ways (XState config, a visualizer, SCXML, a PlantUML diagram, etc.), focuses on the thinking, the logic, leaving the rest to whatever is listening and sending signals to and from the brain.
Speaking of signals to and from, this separation has the same benefit for the implementation details: we can plug and play different implementations for the same brain.
Examples of different interpreters for the traffic light machine
const light = document.getElementById('light');
function startBlinker() {
const toggleBlink = () => light.classList.toggle('on');
let interval = setInterval(toggleBlink, 700);
return () => {
clearInterval(interval);
light.classList.add('on');
};
}
const trafficLight = interpret(lightMachine.withConfig({
actions: {
setColorRed: () => light.dataset.color = 'red',
setColorYellow: () => light.dataset.color = 'yellow',
setColorGreen: () => light.dataset.color = 'green',
},
services: {
blinkerRelay: () => () => startBlinker(),
},
})).start();
const log = (...args) => console.log(...args);
const consoleLight = interpret(lightMachine.withConfig({
actions: {
setColorRed: () => log('Set color to Red'),
setColorYellow: () => log('Set color to Yellow'),
setColorGreen: () => log('Set color to Green'),
},
services: {
blinkerRelay: () => () => {
log('started blinking');
return () => log('stopped blinking');
},
},
})).start();
const color = document.querySelector('#text .color');
const blinker = document.querySelector('#text .blinker');
const textLight = interpret(lightMachine.withConfig({
actions: {
setColorRed: () => color.textContent = 'red',
setColorYellow: () => color.textContent = 'yellow',
setColorGreen: () => color.textContent = 'green',
},
services: {
blinkerRelay: () => () => {
blinker.textContent = '(blinking)';
return () => blinker.textContent = '';
},
},
})).start();
function mockConfig(assert) {
return {
actions: {
setColorRed: () => assert.step('red'),
setColorYellow: () => assert.step('yellow'),
setColorGreen: () => assert.step('green'),
},
services: {
blinkerRelay: () => () => {
assert.step('start blinking');
return () => assert.step('stop blinking');
},
},
};
}
module('Traffic Light', function () {
test('cycles the light', function (assert) {
let config = mockConfig(assert);
let machine = lightMachine.withConfig(config);
let subject = interpret(machine);
subject.start();
subject.send(['NEXT', 'NEXT', 'NEXT']);
assert.verifySteps(['red', 'yellow', 'green', 'red']);
});
});
Context
Statecharts have a concept of a context (XState) or data model (SCXML), which is used for tracking the non-finite information that feeds into the logic of the machine.
There is a lot of debate concerning what goes into the context of a statechart and I have a strong opinion about this. For me the context is a form of localized memory or, pardon the metaphor, a form of L3 Cache for the state machine.
The Data Model offers the capability of storing, reading, and modifying a set of data that is internal to the state machine.
https://www.w3.org/TR/scxml/#data-module (emphasis mine)
In this same brain/body metaphor, this is like the brain keeping a count to know when to stop petting a dog, or tracking heat signals received by the body to know when to remove a hand from a flame.
I have, however, seen counterarguments that treat the context as a dumping ground for passing data between systems: complex data and non-scalar values are dumped in, the machine mutates them, and the data is served back out by exposing the context on every state/event/transition.
I'm here to argue against this concept. My opinion is that the context should remain for dealing with non-finite state specific to the machine itself; if the machine needs input from outside itself, it uses guards, and if it needs to mutate something outside itself, it uses actions. Let the data ownership remain with the system that created the data.
Ideally the context should be something that can be serialized, so the machine can remain deterministic. And ideally a machine's interpreted state should not be passed to others outside its control.
createMachine(
  {
    initial: 'idle',
    context: { pressCount: 0 },
    states: {
      idle: {
        on: {
          PRESS: [
            { target: 'boom', cond: 'hasReachedPressLimit' },
            { actions: 'incPressCount' },
          ],
        },
      },
      boom: { type: 'final' },
    },
  },
  {
    guards: {
      hasReachedPressLimit: (ctx) => ctx.pressCount >= PRESS_LIMIT,
    },
    actions: {
      incPressCount: assign({
        pressCount: (ctx) => ctx.pressCount + 1,
      }),
    },
  }
);
Unlike the use of it as a database of sorts:
// Please don’t try to do something like this
interpret(
  createMachine(
    {
      initial: '…',
      context: { model: null },
      on: {
        MODEL: { actions: 'assignModel' },
      },
      states: { … },
    },
    {
      actions: {
        assignModel: assign({
          model: (_, event) => event.model,
        }),
      },
    }
  )
)
  .onTransition((state) => {
    doSomethingWith(state.context.model);
  })
  .start();
Derived value from states
Another pattern I see is deriving values from state changes. This is a case where the state itself carries enough contextual meaning that the system using the state machine needs no explicit side effects from the state change. It is most noticeable in small cases where the state name itself is an identifier for another system.
const light = document.getElementById('light');

interpret(
  createMachine({
    initial: 'red',
    states: {
      red: { … },
      yellow: { … },
      green: { … },
    },
  })
)
  .onTransition((state) => {
    light.dataset.state = state.toStrings().join(' ');
  })
  .start();

#light[data-state~=red] { background-color: salmon; }
#light[data-state~=yellow] { background-color: gold; }
#light[data-state~=green] { background-color: lightgreen; }
For the above example it is easy to see the benefit of deriving presentation value from the state machine's current state. But this idea does break down at scale. At some point the lines between what is state-derived and what the presentation is actually meant to do get blurry and confused. It is at this point that I will argue for returning to actions. This way we can give a name to what we want: intention-revealing names.
Doing this makes it explicit where a given side effect or presentation result originates. You have a place to trap it and manage it as the logic (requirements) changes.
This applies specifically when the result differs from the meaning of the state. A good example could be the need to hide an element in one state but show it in another. If we had a simple machine with states called hideThing and showThing, then yes, I can see it as a derived value. But if it is, say, a form that can be put into the states simple and advanced, it doesn't make much sense to associate the two with if/else statements separate from the logic. Instead I would propose adding actions like hideDetails and showDetails.
In this way you know that part of the form was hidden because hideDetails was executed, and looking at the machine it is easy to see when that action was dispatched. Maybe later you add a third state, intermediate, where you need to be more fine-grained about what gets shown or hidden. Doing so with a huge conditional block would be maddening. Doing so as dispatched actions gives you a clear and visual way to know how and when those results took place.
Conclusion
I hope I've demonstrated some ideas that help drive your future statechart endeavours. I think that if we pull back from the implementation details and think about the higher-level design of our logic, including visualizations, we can make much more understandable and maintainable systems. Statecharts bring a great deal of advantages to front-end development, but they do not come without their foot-guns, and it is on us to make sure we properly tame such beasts.
A good trick for this is to think about a state machine like a brain, detached from any tangible reality, able only to receive input signals and send output signals. With this metaphor it becomes clear how to define actions, guards, and services as named hooks: implement them inline for internal logic (like the use of context) and attach them externally for outside side effects and data ownership.
Doing this makes it clear who owns what and who is responsible for what. In much the same way we separate Models, Views, and Controllers, I propose that we also isolate a state machine's internal logic from the consumer of the state machine reacting to that logic. Separation of concerns.