There is no Spoon

Scott Francis
Published in Austin Startups
4 min read · Oct 1, 2023


As software continues to improve — and in particular, the software and hardware that drives machine learning, adversarial machine learning and generative machine learning — we’re going to see more and more articles questioning the role of humans in … well in all sorts of things.

We’ll also see questions about whether we can trust these algorithms to make decisions, as this interesting post on LinkedIn asks:

When we allow algorithms to make decisions on our behalf, it can be extremely rewarding. For instance, doctors can make better informed medical decisions based on the analysis and predictions from machine learning algorithms. But at times, algorithmic decision-making can also impede human autonomy and amplify existing biases.

Taking the time to weigh the risks and benefits of algorithmic decision-making can allow developers, users, and other stakeholders to understand when they’d like to use machine learning technology. Some may like to center their algorithmic development around human ethics and values and put systems in place for regulation and intervention when things go awry.

Weigh in: How can we strike a balance when allowing machine learning to make decisions for us?

(full article here)

The first thing to realize is that “there is no spoon.” Apologies to The Matrix.

Allow me to explain. If you’ve automated something with machine learning, you’re not really making a decision at the moment the software routes one way or the other. You made your decision earlier, when you decided to offload the work (the decisions) to an automated software program. It’s much the same as setting cruise control at 65 mph: the car isn’t deciding how fast you’ll go; you decided. The car is simply following instructions, increasing or decreasing speed to match that set objective. If the cruise control were smarter and set the speed at the speed limit on any road, you’re still making the decision to have the car drive the speed limit; the car is just automating the adjustments that keep it within a range around that number.
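The cruise-control analogy can be sketched in a few lines of code. This is a deliberately simplified, hypothetical controller (real cruise control is more sophisticated); the point is that the target speed is a human decision made up front, and the loop merely closes the gap toward it.

```python
# Minimal sketch of cruise control as set-point automation (illustrative only).
# The "decision" -- the target speed -- was made by the driver before the loop
# ever ran; each step just follows instructions to reduce the error.

def cruise_control_step(current_speed: float, target_speed: float,
                        gain: float = 0.5) -> float:
    """Return an adjusted speed that moves toward the driver's chosen target."""
    error = target_speed - current_speed
    return current_speed + gain * error

speed = 55.0
target = 65.0  # the driver decided this number; the car did not
for _ in range(10):
    speed = cruise_control_step(speed, target)
# after a few iterations, speed has converged close to the target
```

Every iteration looks like the car "deciding" to speed up, but it is only executing the objective that was chosen for it.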

Example: if we have team members read documents, interpret the data in those documents, and input it into systems, then each time we read the data we are making a decision about how to handle it. Is that a 4 or a 7? Is that an address, a zip code, or a phone number? Is this an insurance claim, an address change, or something else? Is this person happy or upset?

If we put machine learning to work, the decision is made at design time, and the rest is just software running. Yes, the algorithm might get more efficient with a good feedback loop (or worse with a bad one). But the software doesn’t make decisions; it does what you’ve designed it to do (even the learning is designed via algorithms, and is therefore part of what it was designed to do).
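The design-time vs. run-time distinction can be made concrete with a toy document router. The routing rules and queue names below are entirely hypothetical; the point is that the human decisions (which categories exist, where unrecognized documents go) are baked in when the router is built, and run time is pure execution.

```python
# Illustrative sketch: the "decisions" live at design time, not run time.

def design_router(keywords_to_queue: dict[str, str], default_queue: str):
    """Design time: here a human decides how documents will be routed,
    including where anything unanticipated ends up (the intervention path)."""
    def route(document_text: str) -> str:
        # Run time: no new decision is made -- just the designed lookup.
        text = document_text.lower()
        for keyword, queue in keywords_to_queue.items():
            if keyword in text:
                return queue
        return default_queue
    return route

# The human decisions, made once, up front:
route = design_router(
    {"claim": "insurance-claims", "address": "address-changes"},
    default_queue="manual-review",
)
```

Swapping the keyword lookup for a trained classifier doesn’t change the argument: the training data, the label set, and the fallback behavior are still chosen at design time.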

On that LinkedIn post are some great perspectives from other experts in our industry. Ian Barkin argues that without an understanding of how this technology works, we’re likely to allow biased algorithms and to lose sight of *why* an algorithm behaved the way it did. He makes a strong argument for human and AI collaboration (“HUMAN+ROBOT collaborations” in his parlance).

Keith McCormick writes: “if end users of the models, whether they be in healthcare, sales, or predictive maintenance, don’t trust the models, they will ignore them or actively try to bypass them. That trust is not automatic — it is earned.” That’s a fantastic perspective to keep in mind as we deploy any new software system — whether AI is involved or not.

I’ll leave you with this thought:

If it can be automated, and it should be… was it a decision? Was there a spoon?

One more thing… if you’re a process geek like me, you might want to listen to this panel on the state of process orchestration from CamundaCon. Sandy Kemsley, Annie Talvasto, Regina Degennaro, and Sriti Gupta hold forth on what’s happening in process orchestration and what it means for practitioners.

https://page.camunda.com/camundacon-2022-state-of-process-orchestration-panel

Another gem from CamundaCon is this video of BP3’s Brian Schlosser sharing our progress adapting Brazos Task Manager from Camunda 7 to Camunda 8. He takes us through the journey and how the architectural decisions underlying Camunda 8 actually track really nicely with the natural evolution of Brazos Task Manager.

For those not familiar, we first wrote Brazos Task Manager because our clients had really interesting process work to sort, filter, organize, and execute, but the existing solutions for doing so left a lot to be desired. So we built our own solution, and transformed the experience for our clients in the process. Brazos Task Manager now powers some very large process-driven workflows at Fortune 500 clients.

If you’re looking for help adopting Camunda 8, or incorporating process orchestration into the way you think about software application development, you’re in good hands with BP3.


Co-founder and CEO of BP3, Magellan International School Board, ATC Board. Interested in Tech, Apple, Startups, Austin, Education, Austin Cuisine.