I found myself with some extra time today after a missed connection in San Francisco on my way back from Seattle.
It seems that an early-morning fog caused a ground hold at SFO that prevented flights from landing (quite common in the Bay Area, I was told). As a result, my flight from Seattle sat parked on the tarmac until the hold was lifted. We missed our connecting flight to Orlando, and the best re-booking available was a red-eye flight that will arrive tomorrow around 7am.
As I contemplate the events that left me stranded at the airport for an extra seven hours, I am reminded of one of my favorite pet peeves: the apparent loss of common sense!
Having heard the explanations on the plane about the situation and the options (or lack thereof) for resolving the problems it created for many of the passengers, it struck me how the same technology that is employed to provide solutions can also be at the root of many of our problems.
Let me explain further. Most humans are endowed with a wonderful gift of reasoning that we call “common sense.” Although this gift is present in most people, few actually use it (based on my personal observation). When it comes to machine-based systems (computers and the software that serves as their “reason”), however, common sense is not even an option. After all, software programs employed in business are designed to produce outcomes that are repetitive and cost-effective, in order to yield a profit.
Though I don’t have the empirical evidence to prove this, it seems to me that the larger a company is, the more the automated system appears to have real “control” over the possibilities and outcomes in a given situation. That can be a shame, and may even produce results that could be counter-productive to the long-term objectives of the business.
Take the example I witnessed on the plane this morning. Not far from my seat were a few passengers who were traveling to Amsterdam, connecting through SFO. It appeared that they would arrive very close to their connection time, but still outside the boarding window, which meant they would miss their flight. I heard an attendant explain that the airline had recently revamped its software so that all schedule problems were “automatically” resolved by the system, which now decides which flights to hold and which connections will be missed.
Throughout the flight those passengers were anxious, repeatedly asking whether someone could place a hold on their connecting flight for the few minutes needed to make their scheduled connection. The attendants answered again and again that the new system made the decisions and there was no way to override it. They assured the passengers that the system took all of the factors into consideration: if they were forced to wait, it would be because the system had decided that was the best possible outcome given all the conditions and variables.
I don’t know if those passengers made it on their scheduled flight to Amsterdam, or if they were forced to wait (as I was) for another flight. However, the experience prompted me to consider the following questions:
- Was the software system correctly programmed to include all relevant criteria to be able to make a best case decision?
- Did the program place any weight on the long-term customer satisfaction (or dissatisfaction) that would result from the decision?
- Was there a way to provide the means to expedite communication of changes to affected passengers so they would not be subject to undue anxiety for a long period of time?
It seemed to me (and probably to many of the passengers on my flight) that the human factor had been removed from these important decisions, a factor that existed in the past. I wonder if the gains in profitability (if any) that result from these system-based decisions are worth the human consequences that accompany them.
This led me to think about automated systems in general, and their use in the construction industry in particular. I believe our industry suffers from the same dilemma: the more technology is in control, the greater the risk that our outcomes will lack common sense!
Think about the practice of BIM, for example. While the concept is quite noble and should produce excellent results, its execution is often poor due to uncoordinated designs, low-quality 3D models, and late delivery of the design (often after construction has been under way for a while).
Or how about the elaborate CPM project schedules used on most jobs? What is the likelihood that the logic between related activities is correct? How accurate is the status information entered in schedule updates? And how accurate, then, is the Critical Path that the system renders from those inputs? I’m sure you could add a few more examples to the two I’ve listed. Let me be clear: my point is NOT that technology is bad, should not be used, or produces poor results. Rather, it is simply that you should be careful when you select and apply the technology-based tools offered to you as the “solution” for problems in nearly every aspect of your business.
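To see why the rendered Critical Path is only as good as the logic behind it, here is a minimal sketch (a toy illustration, not any real scheduling product) of the standard CPM forward and backward passes. The activity names and durations are invented for the example; a single wrong or missing predecessor link changes which activities the computation reports as critical.

```python
def critical_path(durations, predecessors):
    """Compute (project_duration, critical_activities) for a simple
    activity-on-node network.

    durations:    {activity: duration}
    predecessors: {activity: [predecessor activities]}
    """
    # Forward pass: earliest finish of each activity is its duration
    # added to the latest earliest-finish among its predecessors.
    early_finish = {}

    def ef(act):
        if act not in early_finish:
            start = max((ef(p) for p in predecessors.get(act, [])), default=0)
            early_finish[act] = start + durations[act]
        return early_finish[act]

    for a in durations:
        ef(a)
    project = max(early_finish.values())

    # Backward pass: latest finish, processed successors-first.
    late_finish = {a: project for a in durations}
    for a in sorted(durations, key=lambda x: -early_finish[x]):
        for p in predecessors.get(a, []):
            late_finish[p] = min(late_finish[p], late_finish[a] - durations[a])

    # Zero total float (earliest finish == latest finish) means critical.
    critical = [a for a in durations if late_finish[a] == early_finish[a]]
    return project, critical


# Hypothetical four-activity job: C follows A; D follows both A and B.
proj, crit = critical_path(
    {"A": 3, "B": 2, "C": 4, "D": 1},
    {"C": ["A"], "D": ["A", "B"]},
)
print(proj, crit)  # 7 ['A', 'C']
```

Drop the link from A to D (the kind of logic error the questions above describe) and the float calculations shift, even though every duration is still “correct” — garbage logic in, garbage critical path out.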
As you select and use these systems, you should ask yourself many “common sense” questions. Finally, when you enter data into your system, be mindful of the old saying: “garbage in = garbage out.”
“Man is still the most extraordinary computer of all.” – John F. Kennedy
If you would like to learn more about ways to reduce your risk in construction projects, order my book Document to Reduce Risk. It explains how to apply the “rules” from the contract for better project management. You’ll also find numerous examples to help you prepare sound construction documentation to address typical project conditions. You can obtain a print or e-book copy by clicking the image on this page.
With my best regards to all of you, until my next issue, Paco.
© Farach Consultants, Inc. • all rights reserved • 954.579.5058