In my recent client work around digital trends already well underway, from predictive analytics and Big Data to artificial intelligence, the Internet of Things (IoT) and everything connected, it's not a matter of trying to slow them down (no one can), but of asking an important, deeper question: is there any moral or ethical compass helping to guide them? Can companies or organizations frame them effectively and respond appropriately?

In January, The White House published a report outlining, like many reports we're seeing, what some of the longer-term consequences of this digital acceleration might be over the next fifteen to twenty years, including:

  • 83% of the jobs where people make less than $20 per hour will be subject to automation or replacement,
  • Between 9% and 47% of jobs are in danger of being made irrelevant due to technological change, with the worst threats falling among the less educated, and
  • Between 2.2 and 3.1 million car, bus and truck driving jobs in the U.S. will be eliminated by the advent of self-driving vehicles.

These are grim predictions for companies that are not preparing fast enough, but very exciting ones for those who see big opportunities in this transformation.

But let's step back for a minute and look hard at today's unfolding digital landscape. First, writing algorithms to predict, or to help develop machine learning, is not an objective art. Developers writing new code are guided by a company's leadership, sometimes explicitly, sometimes implicitly (and possibly with very little guidance at all), and the intended and unintended consequences of all these predictive analytics under development, both for a single company and in combination with others, are yet to be determined. Cathy O'Neil's book, Weapons of Math Destruction, makes the case more eloquently, drawing on her years of work and research at Harvard, MIT, Berkeley and Columbia, and offers two strong statements about algorithms and data science: algorithms are nothing more than opinions embedded in code, and there is no such thing as an objective algorithm, because, at the very least, the person building the algorithm defines success.
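O'Neil's point can be made concrete with a small sketch. The data, field names and scoring rules below are entirely hypothetical, invented for illustration: two developers scoring the same applicant records, each encoding a different definition of "success," will surface different "best" candidates.

```python
# Hypothetical applicant records (illustrative only).
applicants = [
    {"name": "A", "years_experience": 12, "test_score": 70, "gap_years": 3},
    {"name": "B", "years_experience": 4,  "test_score": 95, "gap_years": 0},
    {"name": "C", "years_experience": 8,  "test_score": 85, "gap_years": 1},
]

def score_tenure_focused(a):
    # One opinion of "success": long tenure matters, career gaps are penalized.
    return a["years_experience"] * 10 - a["gap_years"] * 15

def score_aptitude_focused(a):
    # A different opinion: raw test aptitude; tenure and gaps are ignored.
    return a["test_score"]

best_by_tenure = max(applicants, key=score_tenure_focused)
best_by_aptitude = max(applicants, key=score_aptitude_focused)

print(best_by_tenure["name"])    # -> A
print(best_by_aptitude["name"])  # -> B
```

Neither ranking is "objective": each simply rewards whatever its author decided success means, which is exactly the opinion-embedded-in-code problem O'Neil describes.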

If you take this a step further, consider Wired magazine's recent book review in March, titled "Tech Bigwigs Know How Addictive Their Products Are. Why Don't the Rest of Us?", about Adam Alter's new book, Irresistible. In the article, Alter talks about how the tech giants behind our current technology (Steve Jobs and Chris Anderson, among others interviewed) imposed strict limits on their own children's use of technology. Why? It is addictive, and designed to be so. Tristan Harris, a "design ethicist" interviewed in the article, suggests "the problem isn't that people lack willpower; it's that there are a thousand people on the other side of the screen whose job it is to break down the self-regulation you have."

So, what does this mean? There is currently little oversight of, or help for, companies thinking through the consequences of such rapid digital acceleration on us, our children and our society. It's the wild, wild west again, except this time the endpoint for tech leaders and pioneers is the entire globe. Is the only answer for the rest of us to act like the tech titans and strictly enforce some guidelines on our own use?

Remember Enron, the smartest guys in the room back in the late 1990s? An important point to ponder is not only the securities fraud they committed, but also the energy futures trading they were inventing in those early days, far ahead of their competitors and of the Securities and Exchange Commission's (SEC) understanding of how to regulate it. Energy futures have since transformed much of our energy trading and our sense of what's possible in the energy landscape. Today's tech titans and entrepreneurs in this digital acceleration are, I would argue, at a similarly exciting, early stage of development, with no reins in sight.

Is it too much to ask companies in the midst of their digital development plans to ask some important moral and ethical questions about the future they are designing? Does it benefit more than it hurts others and our planet? I remember asking a new pharma company's scientists years ago whether their pharma innovations and patents, drawn from Amazonian plants, were developed in a sustainable manner, involved local people, and shared the benefits back with the locals. The scientists were stunned by my question and had no response.

In my recent digital strategy and scenario planning work with companies around self-driving cars, smart cities and infrastructure, and IoT for real estate properties, among others, I am helping leaders and teams ask many of these same questions. The good news: more are listening than ever before, and they are concerned enough to address these questions in their planning. The bad news: the digital acceleration train has left the station, and it's accelerating fast.