Creativity & Art: What Is Good Enough?
Am I good enough?
What can mathematics teach us?
In our last blog post we briefly discussed Herbert Simon, a remarkable Nobel Prize winning economist who came up with the idea (obvious in retrospect, like almost all good ideas) of ‘bounded rationality.’
What that means is that we are continually making decisions with limited information and limited ability to analyze that information.
This seeming limitation on our decision making capability should not come as a rude surprise to creatives.
How can you know all of the ramifications of an action when some of the possible outcomes haven’t even been created yet?
The Known, The Unknown & The Unknown Unknown
We’ve discussed the known, the unknown, and the unknown unknown in the past.
The known is what our boundedly rational fictional person deals with, in tandem with a limited ability to analyze that information. In this scenario, we’re shooting from the hip.
The unknown is what they *would* deal with if they had unlimited time and budget to acquire information.
But the unknown unknown requires both an infinite amount of information and an infinite ability to process it, in order to make sense of that which can’t even be identified as unknown at first blush.
This is a tall order.
But what if we could actually do that?
Time Enough For Deep Thinking: Time, Immortality, Creativity & Decisions
It would be like being immortal, since being immortal means that there would be time enough to acquire all the knowledge we need and time enough to think about it thoroughly.
And being immortal is the same as being dead, because the eternal becomes the unchanging, and life is all about change.
There is no urgency, everyone has the same infinite knowledge, and there is no need or desire to change anything.
But back to Herbert Simon.
Satisficing & Good Enough
Bounded rationality all sounds kind of dreary and limited, like settling for the mediocre, the second- (or 131st-) best: scuffed furniture, stale potato chips, dented cars, threadbare clothes, lukewarm coffee.
But is satisficing all bad?
We believe the answer is no, but it will take some explaining.
Satisficing in economics has a distant relative in theoretical computer science called satisfiability.
We intend to get them better acquainted with one another, because there are riches to be mined here.
Satisfiability has its roots in the 19th century and the work of George Boole (1815-1864), who wrote a book with the modest title, The Laws of Thought.
He was a philosopher, logician, and mathematician, the epitome of a self-made man who stormed the walls of academe with nothing more than a keen intellect and good shoes, suitable for social climbing. (His father was a shoemaker.)
Inventing A New Language: Boolean Logic
He invented what we now know as symbolic logic, lending mathematical rigor to logical arguments previously dependent on natural languages that were ill-suited to the purpose.
It was like trying to communicate about architecture using dance— possible, but fraught. A chiropractor might need to be on hot standby.
George Boole did not invent logic, he just created a concise way of stating and analyzing logical concepts.
Logic goes back to Aristotle, Plato and earlier yet.
But…inventing a new language to describe old concepts is not trivial or derivative.
Language is important.
Just as in artistically creative pursuits, a new language allows one to have one foot in the known and have the confidence to put the other in the unknown, the adjacent possible, and to benefit from and internalize the process.
A specialized language is more than convenient. Imagine trying to express even a single equation using conventional vocabulary and grammar.
The theoretical physicist Stephen Hawking, who wrote the popular science book, A Brief History of Time, was once quoted as saying,
My publisher told me that every equation in a book divides the readership in half.
Though Bruce was tempted to insert an equation or two, he resisted the temptation. He’d be down to one reader (me), or very few, in no time.
The Truth Table
The central aspect of Boolean Logic is the “truth table”.
Since you shouldn’t confront symbolic logic without a good breakfast, we’ll use breakfast as an example to make sense of the truth table.
My goal is to satisfy my need for breakfast. This depends on what I get to eat.
Let’s say I’m a ham and eggs person. I do not consider breakfast a success until I’ve had ham AND eggs.
Mathematicians love to make abstract symbols for things, not unlike certain artists.
Here is the truth table for AND (written “∧”):

| p | q | p ∧ q |
|---|---|-------|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

Notice there are three columns in the truth table.
For our example,
- T stands for True, F stands for False
- the “p” column means “Got eggs?”,
- the “q” column means “Got ham?”,
- and the right-hand column, p ∧ q, is my “Got breakfast?” column.
All the possible combinations are laid out in the table.
There are four possible combinations, but only one produces the breakfast I want.
The fact that I’m hard to please and have to have ham AND eggs acts as a constraint that reduces four possibilities to one.
Let’s say I was easier to please, and could be satisfied with ham OR eggs.
The truth table would look like this, with OR written “∨”:

| p | q | p ∨ q |
|---|---|-------|
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |

I can have a satisfying breakfast with three out of the four possibilities.
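For readers who like to tinker, both breakfast tables can be generated in a few lines of Python (a minimal sketch; the variable names and print format are ours):

```python
from itertools import product

# p = "Got eggs?", q = "Got ham?"
# Enumerate all four combinations and evaluate the picky (AND)
# and the easygoing (OR) breakfast requirements.
for p, q in product([True, False], repeat=2):
    print(f"p={p}\tq={q}\tp AND q = {p and q}\tp OR q = {p or q}")
```

Only one of the four rows comes out True under AND, while three do under OR, matching the two tables.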
Constraint & Possibility
ORs keep doors open, ANDs close them.
ORs keep us in the Land of Yes. ANDs are how symbolic logic registers constraints.
As an artist, you must continually strike a balance between constraint and possibility.
Possibility is akin to unfettered creativity, stream of consciousness mark making, many ideas and so forth.
Whereas, constraint is akin to decision making, consideration and editing.
What we’re talking about here is the dance between the spontaneous and the considered.
Now, back to the truth table.
The Truth Table
The truth table tells us what the result of a certain kind of interaction between logical entities is.
These entities are “atomic” in that they are the fundamental building blocks of logic.
They can only take on two values: True and False. (It is also possible to talk about “Maybe”, but that is beyond the scope of this post)
The sample table tells us how an interaction between “atoms” works.
Isn’t that rather a lot of fuss for something obvious?
It turns out to highlight our old friend combinatorics, which we talked about in terms of creative options in a previous post.
Super Exponential Possibility
If, instead of two “atoms”, we consider larger numbers of interacting “atoms”, the number of unique possible ways to fill out the logic output table (the column on the right) grows enormously fast—“super exponentially” in math-speak.
For five atoms, there are over 4 billion possibilities, and it just goes up from there, very quickly.
We submit to you that many of our logical deliberations involve more than five atoms.
They might involve dozens or hundreds, because life is complicated.
The number of ways that these logical atoms can interact exceeds the number of real physical atoms in the Universe at a point between eight and nine logical atoms.
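The arithmetic behind these claims is easy to check: a truth table over n atoms has 2**n rows, and each row’s output can independently be True or False, giving 2**(2**n) possible tables. A quick sketch (the function name is ours):

```python
def num_truth_tables(n_atoms):
    """Count the distinct ways to fill out the output column of a
    truth table over n_atoms: 2**n_atoms rows, 2 choices per row."""
    return 2 ** (2 ** n_atoms)

print(num_truth_tables(5))  # 4294967296 -- the "over 4 billion" for five atoms

# A common rough estimate puts the number of atoms in the observable
# Universe around 10**80; the crossover lands between 8 and 9 logical atoms.
print(num_truth_tables(8) < 10**80 < num_truth_tables(9))  # True
```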
The question changes from finding the “best” solution to finding one that works reasonably well, since finding the best may take a huge amount of search effort.
The Sargasso Sea Of Too Much Possibility
This also brings up the downside of living in the Land of Yes and creating a requirement full of ORs so that most of the vast numbers of possibilities in our huge truth table end up being True.
It’s an undifferentiated Sargasso Sea of too much possibility.
We’re far at sea and all we’ve got is ORs.
Constraints are essential
ANDs tie things together; ANDs create meaning. Think of a childhood puzzle in which a lot of ANDs (ordering the numbers in the puzzle) create meaning from scattered words.
But what do we want from logic as humans?
We want things to work out, we want them to be possible.
Whatever the interactions are (the ANDs and ORs) and whatever the atoms are (Trues and Falses), we want to end up with True as the outcome.
True means that the combination works.
We say that it “satisfies” the interactions between all the logical atoms, whether it be breakfast or something much more complicated.
Whenever mathematicians encounter very large numbers, they either get abstract or they get statistical. Let’s go for statistical in this case.
If we string together logical atoms and interactions like beads on a necklace, picking them out willy-nilly from a box full, a statistical truth begins to emerge.
If there are a lot of atoms and not so many ANDs, it is easy to find a combination of Trues and Falses that satisfies the truth table, meaning the end result is True.
This is the unconstrained Sargasso Sea we referred to above.
The vast majority of combinations, guessed at randomly, satisfy the system.
If we pile on the ANDs without increasing the number of atoms, getting the right combination of Trues and Falses to produce a result of True gets harder and harder.
First it’s like finding a needle in a haystack and finally becomes downright impossible.
You end up with an impossibility: essentially having to be in two places at once, or in this case, being both True and False.
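This needle-in-a-haystack effect can be simulated with a toy model (ours, and much simpler than full satisfiability): each AND constraint pins one randomly chosen atom to one required value, and we estimate what fraction of random True/False assignments satisfies all of them at once.

```python
import random

random.seed(0)  # reproducible runs

def fraction_satisfying(n_atoms, constraints, trials=10_000):
    """Estimate the share of random True/False assignments that meet
    every (atom_index, required_value) constraint."""
    hits = 0
    for _ in range(trials):
        assignment = [random.choice([True, False]) for _ in range(n_atoms)]
        if all(assignment[i] == v for i, v in constraints):
            hits += 1
    return hits / trials

n = 12
for k in (1, 4, 8, 16):
    constraints = [(random.randrange(n), random.choice([True, False]))
                   for _ in range(k)]
    print(k, "constraints:", fraction_satisfying(n, constraints))
```

As the number of constraints grows, the satisfying fraction collapses toward zero; and once two constraints pin the same atom to opposite values, no assignment works at all: the “True and False at once” dead end.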
Now let’s add one more wrinkle to make this more human and less abstract: Let’s say that each logical atom is worth “points”, just like letters in Scrabble.
Then we could define the “best” solution as not just one that is satisfiable, but one that has the highest number of points.
Scrabble maps onto Boolean logic very readily: letters have to combine in such a way as to produce a True dictionary word. The rules of spelling provide the interactions, or constraints.
It turns out that finding the best (optimal) solution in the points example above can be very difficult.
Like so much of life, it’s a search problem.
Maybe we don’t need the best.
Maybe one that satisfies the constraints/interactions is good enough, or perhaps one that satisfies them and also clears a certain score.
Now we have the connection between logical satisfaction and economic satisficing, particularly if we crassly equate points with money.
George Boole, meet Herbert Simon.
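To make the points idea concrete, here is a small sketch (the menu items and point values are invented for illustration): brute-force every breakfast combination, keep the ones that satisfy the easygoing ham-OR-eggs constraint, then compare the single optimal answer with the satisficing goody-bag.

```python
from itertools import product

POINTS = {"eggs": 3, "ham": 4, "toast": 1}  # made-up Scrabble-style values

def satisfies(menu):
    """Our easy-to-please constraint: ham OR eggs."""
    return menu["ham"] or menu["eggs"]

def score(menu):
    return sum(POINTS[item] for item, chosen in menu.items() if chosen)

menus = [dict(zip(POINTS, combo))
         for combo in product([True, False], repeat=len(POINTS))]
satisfying = [m for m in menus if satisfies(m)]

best = max(satisfying, key=score)                       # optimizing: one answer
good_enough = [m for m in satisfying if score(m) >= 4]  # satisficing: a goody-bag
print(score(best), len(good_enough))  # -> 8 5
```

One optimal menu versus five good-enough ones: the satisficer keeps options in reserve for when the rules, or the kitchen’s supplies, change.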
Optimizing Versus Satisficing
The remarkable thing about satisficing versus optimizing is that there can be many ways to satisfice when there is usually only one optimal solution.
The satisficing solutions provide a goody-bag of alternatives.
Perhaps this is why optimizers tend to look worried and satisficers whistle while they work.
If you’re in a situation where the logic is fluid and the constraints are changing, having a big bag of options can be a better deal than having the one best solution of the moment.
The one best solution can be brittle and leave you without options when the earth shifts beneath you.
It can be like standing on top of the Matterhorn.
One step and things get worse fast.
On the other hand, if you’re standing on top of a broad, flat-topped plateau, there is a good chance that you’ve got “something in the bag” that works (i.e., satisfices) if the rules change.
But you also don’t want your “bag of possibilities” to be too big or you can’t lift it.
Striking A Balance In Your Art
How do you strike a balance between the “best” and “many possibilities”?
As an artist and creative, what do you do when you feel you have “too many ideas?”
Being “good enough” can, in the long run, be better than being “best”.
It gives you a better chance of keeping on keeping on. As such it is related to evolution, but that is best saved for another blog post.
Optimization can be its own worst enemy, particularly in a dynamic and uncertain environment.
With gratitude from my studio to yours,
P.S. I was interviewed yesterday on Instagram LIVE by Sandra Felemovicius. We explored the intersections of art, creativity, science and psychology- all to benefit Feeding America! Catch the IGTV Interview HERE.
You can find Sandra on Instagram @sandrafeleart.