
Human Intelligence vs. Machine Learning and AI – Which is Best and When?

By Nancy Mills

People like scores. They like to know which team won, wins vs. losses, how much weight they lost, their body mass index, or their portfolio’s growth. Business people in particular really like numbers; it’s what made the “balanced scorecard” concept so successful (the framework that added strategic performance measures to traditional financial metrics for a more balanced view of organizational performance).

With all the metrics available and the enormous data collection happening nowadays, there is an obsession in the business world for quantifying everything; but numbers get problematic, indeed deceiving, when they are applied with flawed logic or to topics that shouldn’t be quantified. Sometimes, numbers are misused by employing the wrong metrics, such as an over-simplification of social media effectiveness scoring.

This debate leads to the question of when it’s appropriate to use machine learning and artificial intelligence (AI) vs. human intelligence to identify and solve problems.

I owe special thanks to Dave Snowden and his Cynefin Model for the understanding of problem solving within a conceptual four-domain framework. This model can help executives sense which context they are in so they can make better decisions and adjust their management style to avoid making costly mistakes.

  • Obvious -- The situation is stable, the cause-and-effect relationship is very clear, and an action will have the same results every time. This is where best practice is suitable.
  • Complicated -- Cause and effect are more separated; experts may weigh several options to develop “good” practice.
  • Complex -- Things are more unpredictable; experimentation and diverse thinking are fostered as solutions emerge. Small efforts can have large consequences or large efforts may have no effect.
  • Chaotic -- Without constraints, action-oriented, and where the new and novel are generated.
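The four domains above can be sketched as a simple lookup from context to recommended approach. This is purely illustrative (the function name and approach strings are my own shorthand, not part of the Cynefin Model itself):

```python
# Illustrative sketch: the Cynefin domains described above, mapped to the
# problem-solving approach each one calls for. Names and phrasing are
# assumptions for illustration, not an official Cynefin API.

CYNEFIN_APPROACH = {
    "obvious":     "apply best practice; cause and effect are clear",
    "complicated": "consult experts; weigh options to find good practice",
    "complex":     "experiment; let solutions emerge from diverse thinking",
    "chaotic":     "act to establish constraints; the new and novel emerge",
}

def recommend(domain: str) -> str:
    """Return the recommended approach for a Cynefin domain."""
    try:
        return CYNEFIN_APPROACH[domain.lower()]
    except KeyError:
        raise ValueError(f"Unknown Cynefin domain: {domain!r}")

print(recommend("complex"))
```

The point of the mapping is the article's argument in miniature: only the "complicated" row is a natural fit for machine learning; the "complex" row calls for human sense-making.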

Is the problem strictly complicated? Machine learning, which works through reiterative cycles, can very possibly help solve it. One example is analyzing search engine algorithms and using what is learned from the results to develop good practices for e-commerce optimization.

On the other hand, if your question is complex and involves human elements, such as culture, network influences, or motivation, then you need a sense-making research method that is based in complexity—and human intelligence. Sense-making methodology combines human narrative (qualitative research) and self-interpretation with quantitative technology. Understanding consumer behavior and the reasons why people make certain decisions is an example of a problem whose solutions come from this methodology.

Computer brain vs. human brain—when do we use which and why?

The Merriam-Webster Dictionary defines intelligence as:

(1): the ability to learn or understand or to deal with new or trying situations: reason; also: the skilled use of reason; (2): the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests); mental acuteness: shrewdness 

I believe the current obsession with AI and machine learning is myopic: these technologies are useful only in certain contexts.

The human brain is not like a computer, nor is a computer like a human brain. Although computers can run “neural network” processes inspired by the neurons of the brain, those networks are not self-organizing and adaptive the way the brain is.

As physicist Richard Feynman said, “Nobody knows what we (humans) do or how to define a series of steps which correspond to something abstract like thinking.”

Furthermore, machine learning, which teaches computers to act in ways they are not explicitly programmed to perform, cannot replace human learning. For example, machine learning seeks to eliminate outliers, when in the human world, outliers can be a significant source of innovation.
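To make the outlier point concrete, here is a hedged sketch of the kind of z-score filter many data pipelines apply before training. The data and threshold are invented for illustration; the article's argument is that the discarded values may be exactly where the innovation signal lives:

```python
# Sketch of a common preprocessing step: discard points more than
# z_threshold standard deviations from the mean. Many ML pipelines do
# something like this; a human analyst might instead investigate the
# outliers as the most interesting part of the data.
from statistics import mean, stdev

def split_outliers(values, z_threshold=2.0):
    """Partition values into (inliers, outliers) by z-score."""
    mu, sigma = mean(values), stdev(values)
    inliers = [v for v in values if abs(v - mu) <= z_threshold * sigma]
    outliers = [v for v in values if abs(v - mu) > z_threshold * sigma]
    return inliers, outliers

data = [10, 11, 9, 10, 12, 10, 11, 50]  # 50 is the unusual observation
inliers, outliers = split_outliers(data)
print(outliers)  # → [50]: the point a pipeline drops, a human might study
```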

We live in incredibly advanced technological times with enormous potential, but our understanding and perspective have not kept pace with the capabilities. There is a blindness about machine learning, with erroneous predictions by some very prominent people about its omnipotence. For centuries, people took Newtonian physics to be the truth; quantum physics has since shown much of that perspective to be incomplete. We will likely make a similar discovery about the promise of robots and computer algorithms that purport to replace, or even explain, human behavior.

Perspective is key. The fundamental beliefs that a truth-seeker holds will drive research methods and solutions. If the researcher believes the body is a machine, she may aspire to have bionic humans, with every piece built in a lab. On the other extreme, a shaman will tell you about the laws of nature and that humans must live in harmony with nature and each other.

What about ethical issues, or levels of complexity that require human judgment? It is very difficult to imagine robots taking over in those contexts. Ethics is a concept a computer could never comprehend.

Insights and human intelligence – complexity beyond the computer

Machine learning is perfect for understanding what other machines do, learning algorithms, mechanical processes, etc. but is inadequate at best and terrifying at worst as an actual human replacement—or for understanding human behavior.

A service called Wordsmith (from Automated Insights) uses AI to generate natural-language text based on datasets. It will make observations such as “In the last month, home prices in the Phoenix metro area have fallen. Overall, 3,214 houses were sold in Phoenix over the last 30 days, with Phoenix County leading with 3,032 sales.” But the company name is a misnomer, as insights require human interpretation. Sure, you can program the Wordsmith bots to generate the words “bad news” for certain increases or decreases that you predetermine, but only a human can make insightful analyses about the larger macroeconomic factors at play and the role of human emotion in the markets. A bot can never replace your real estate agent.
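The mechanism being described is essentially template-driven text generation: data in, canned sentences out. Here is a minimal sketch of that idea (the function, dataset, and threshold are invented for illustration; real services are far more sophisticated, but the principle that the thresholds and the interpretation must come from a human still holds):

```python
# Minimal sketch of template-based "natural language generation" of the
# kind described above. The "bad news" threshold is arbitrary: a human
# must decide what counts as bad news, and why.

def describe_market(region: str, sales: int, price_change_pct: float) -> str:
    """Turn a few market numbers into a templated sentence."""
    trend = "fallen" if price_change_pct < 0 else "risen"
    sentence = (
        f"In the last month, home prices in the {region} metro area have "
        f"{trend}. Overall, {sales:,} houses were sold over the last 30 days."
    )
    if price_change_pct < -5:  # predetermined cutoff, chosen by a person
        sentence += " Bad news for sellers."
    return sentence

print(describe_market("Phoenix", 3214, -6.2))
```

The template can report the “what” faithfully, but nothing in it knows whether falling prices reflect interest rates, seasonality, or panic; that “why” is the human analyst's job.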

The language should be more precise: artificial intelligence isn’t “intelligence,” it is mass computation at scale. Can a robot ever be a handyman or a nanny? Those so-called lower-level jobs require immense judgment and the use of discerning knowledge in context, a level of complexity not conceivable in a computer. In spite of a widely publicized announcement in 2016 about a chatbot API that could give people information on request, Facebook scaled back its AI goals this February after it was revealed that its chatbots could fulfill only 30 percent of requests without human agents.

In spite of the work Google, Facebook, and other major tech companies are doing around artificial intelligence and machine learning, some prominent thinkers and scientists have warned against the advancement of AI without human controls around its capabilities. While admitting its potential, they warn that it must also reap societal benefits or risk creating threatening pitfalls. Their jointly signed statement about this technology stressed that “research … should focus not only on making AI more capable, but also on its benefits to society.”

Consumer research powered by big data and analytics can provide extremely detailed outputs about past behavior, but those outputs are incomplete and, as I argued at the outset, often misleading because they lack the qualitative “why” (focusing instead on the quantitative “what”). Marketers must never lose sight of the humans in their research, because humans are never just what the (imperfect) numbers show.

Rather than rush to machine learning as the solution for gleaning consumer insights, perhaps society is better served by investing in better qualitative research that relies on conversation, nuance, and the human factor … and by using the power of AI to augment, rather than replace, human capability.

Throughout her 20-year career, Nancy Mills has helped major corporations glean insights from global consumer and market research. She has worked with clients including GlaxoSmithKline, Pfizer, Colgate-Palmolive and Philips Consumer Products to identify marketing opportunities and gain significant market share. 

She specializes in bridging technology with business users to find insights at the point of need from masses of data. She earned an MBA in International Management from Thunderbird and a BA summa cum laude in mass communication from Texas State University.

Nancy has delivered numerous presentations to clients and at major industry events, including HAPPI Anti-Aging Conference and Health and Beauty Association Global Expo, on market and consumer trends, competitive analysis and forecasts.
