Monthly Archives: November 2015

Stoopid AI

Most people think Artificial Intelligence (AI) looks like a robot, is smarter than a human, and only speaks the truth.

The entertainment industry would have us believe that AI systems are attractive Caucasian robots with minds of their own like Ava in Ex Machina and Sonny in I, Robot. And that they come in angry versions like the Terminator soldiers and shiny happy servants like C3PO in Star Wars.

With this connection to humanity, it’s funny that nobody talks about stupid AI.

Be certain: we’re going to have a world of stupid AI. Some might be robotic, but regardless of form, their knowledge bases will be tilted, biased, or just plain whacked. Just like people.

Here’s how it goes down.

In the beginning there will be an AI platform; IBM Watson is a good example of a robust AI you can buy today. Off the shelf, Watson knows nothing, not even math, and does nothing except learn. You need to teach it. You teach it by feeding it information.

What information? This is where stupid comes in.

Information, knowledge, and wisdom become very confusing when you try to add any measure of accuracy and reliability; they are downright head-breaking when you try to subject them to truth. These are age-old philosophical issues dating back to Aristotle, Plato, and Confucius, and we still haven’t figured them out.

Many people believe a popular opinion is true, or that if it comes from a certain source, it must be accurate. With some social topics, like celebrity gossip, popularity does rule and proofs are difficult to come by. It gets harder when you contrast, for example, creationism vs. evolution as a literal-interpretation vs. scientific-discovery argument.

Let’s feed the AI a body of knowledge (a corpus of information) built from the billions of words written in favor of creationism. Let’s also add billions of words that align with it and do not contradict it, such as Evangelical teachings.

Then start asking this Cognitive Creationist AI some questions.

In the example case of using IBM Watson as the platform, you will get a “probabilistic” answer with 90% confidence that the world was created between 5,700 and 10,000 years ago. Part of the rationale supporting this is that, according to Gallup surveys, 47% of USA adults answered that “God created humans in their present form at one time within the last 10,000 years.”
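To make the mechanics concrete, here is a minimal toy sketch in Python (not Watson’s actual API; the corpus snippets, candidate answers, and scoring are invented for illustration) of how a corpus-trained question answerer ends up confidently wrong: its “confidence” only measures how much of the training corpus agrees with an answer, not whether the answer is true.

```python
from collections import Counter

# Toy "corpus": every document the AI was taught with favors one viewpoint.
# The snippets, candidate answers, and keywords are invented for illustration.
corpus = [
    "the world was created roughly 6,000 years ago",
    "scripture dates creation to under 10,000 years ago",
    "the earth is young, created within the last 10,000 years",
    "humans were created in their present form about 6,000 years ago",
]

candidates = {
    "under 10,000 years old": ["6,000 years", "10,000 years", "young"],
    "about 4.5 billion years old": ["billion years", "4.5 billion"],
}

def answer(candidate_keywords, docs):
    """Score each candidate answer by how much of the corpus supports it."""
    support = Counter()
    for label, keywords in candidate_keywords.items():
        support[label] = sum(any(k in doc for k in keywords) for doc in docs)
    total = sum(support.values()) or 1
    best, hits = support.most_common(1)[0]
    return best, hits / total  # "confidence" = corpus agreement, not truth

best, confidence = answer(candidates, corpus)
print(f"How old is the world? {best} (confidence {confidence:.0%})")
# With a one-sided corpus, nothing disagrees, so the confidence looks very high.
```

The point of the sketch is only that a confidence score reflects what the system was fed; feed it a tilted corpus and it will be confidently tilted.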

Not to pick on religion as an easy target, but similar issues are pervasive in the “sciences,” particularly medicine and nutrition. The debate around sugar and high-fructose corn syrup has raged on for decades, with study contradicting study. The same goes for medical issues around cholesterol and high blood pressure: their causes and treatments, and whether to use diet or pharmaceuticals, are still subject to divergent facts.

Also in medicine, countries around the world differ on what best-practice treatment might be recommended, such as herbal remedies over drugs, or acupuncture over surgery, so a medical AI in China could yield different answers that may or may not be stupid.

While not technically stupid, note that an AI could be trained with structured information to return biased answers, for example as a new form of paid endorsement and brand awareness. The answer to “the best, most popular, favorite, cool, great” thing can be taught and also associated with certain other answers and attributes.

  • AI, where is a popular beach destination in January?
  • Answer: The Cancun-brand resort is popular and has a special offer that includes Airline-brand discounts and Branded meal vouchers.
  • AI, I need more calcium in my diet.
  • Answer: Milk is popular, but Brand-name cheese is tastier.
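A minimal sketch of how that kind of bias can be wired in, assuming a simple answer ranker that adds a paid boost to sponsored candidates (the candidates, weights, and scoring below are invented for illustration, not any real platform’s behavior):

```python
# Toy answer ranker: candidates tagged as "sponsored" get a boost.
# All names, relevance scores, and weights are invented for illustration.
candidates = [
    {"text": "Milk is a common source of calcium.", "sponsored": False, "relevance": 0.9},
    {"text": "Brand-name cheese is a tasty source of calcium.", "sponsored": True, "relevance": 0.7},
    {"text": "Leafy greens also provide calcium.", "sponsored": False, "relevance": 0.8},
]

SPONSOR_BOOST = 0.3  # the "paid endorsement" knob

def rank(answers):
    # Score = relevance plus a flat boost for sponsored content.
    return sorted(
        answers,
        key=lambda a: a["relevance"] + (SPONSOR_BOOST if a["sponsored"] else 0),
        reverse=True,
    )

print(rank(candidates)[0]["text"])
# The sponsored answer wins despite having the lowest relevance.
```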

With apologies to Forrest Gump, “stupid is what stupid teaches.” Let’s hope we humans are ultimately able to find Intelligence of the non-artificial kind first.

Or maybe my AI will negotiate with your AI for a true fact.

Ask me anything. Click to read more about AI and branding.

Larry Smith

The Interactive Media & Advertising Crash

Advertising is my friend. I started my career as a MadMan working for Della Femina Travisano and Partners at 625 Madison Avenue, and went on to found US Interactive, a digital media and e-commerce development agency.

Only now has it dawned on me that the problem with media & advertising is that you don’t know what you’re buying, and you’re never sure you got it.

The Internet was supposed to change that, but instead, it made it worse.

From the earliest days of magazines and newspapers, you knew the ad size and circulation, and even some distribution and location information, but you never knew who, where and when your ad was being consumed and if it did anything.

Radio and Television changed some of that by improving upon where (home) and when (schedule), but gave up who and circulation (called reach), which created the problem of unknown frequency (how often a person viewed the ad).

Digital media and the Internet were supposed to change all that, and they did, but not for the better. We were supposed to know the exact time and place, we would know how to attribute and track actions, and we could predict preferences by observing actions. Accountability and targeting were the promise.

Unfortunately, the promise has failed and the solutions are elusive, though we can boil it down to three issues and offer three solutions.

The first issue is all about fraud. Server farms and botnets around the world click on ads, driving traffic into and out of fabricated content sites and pages. Estimates suggest 92% of ads served are not seen, thereby defrauding advertisers of $7.5 billion!

The second issue involves “programmatic” buying, which allows automated systems to bid and ask, then automatically purchase and serve an ad that fits within programmed parameters. While this simplifies buying, it exemplifies the fact that you do not know what you are buying and obfuscates what you actually got; in knowledge management and marketing theory, this would be characterized as buying an abstraction rather than a specific thing.
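As a rough sketch of why programmatic is an abstraction, imagine the campaign reduced to a handful of parameters and a bid function that accepts whatever impressions match; the field names, thresholds, and bid logic below are invented for illustration, not any real exchange’s API.

```python
# Toy programmatic bidder: the advertiser only specifies parameters;
# the system decides, impression by impression, what is actually bought.
# All field names, values, and thresholds are assumptions for illustration.

campaign = {
    "max_cpm": 4.00,           # max price per thousand impressions
    "audience": {"travel", "beach"},
    "min_viewability": 0.5,    # predicted chance the ad is actually seen
}

def bid(impression, params):
    """Return a bid price, or None to pass on this impression."""
    if not params["audience"] & impression["segments"]:
        return None
    if impression["predicted_viewability"] < params["min_viewability"]:
        return None
    # Bid below the cap, scaled by how likely the ad is to be seen.
    return round(params["max_cpm"] * impression["predicted_viewability"], 2)

impression = {
    "segments": {"beach", "retired"},
    "predicted_viewability": 0.6,
    "url": "example-content-site.test/article",
}

print(bid(impression, campaign))  # e.g. 2.4 -- but you never saw the page it ran on
```

The buyer sets the abstraction (the parameters); what was actually delivered, and where, is whatever the machine matched.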

The third issue is target-market “inference,” the belief that you can profile a browser and cookie into a prospective customer because of behavior, context, or availability. We all know how this works: you constantly see ads for products you’ve already bought. I liken it to driving forward while looking through the rearview mirror, chasing a customer who waved at you as you passed long ago.

So much for the tragedy, what are the solutions to knowing what you bought and knowing what you got?

The first and easiest solution is to put people back into the equation. The human factor is more expensive: you pay a premium, and for the most part you can only buy from “premium publishers” who still have a sales staff to package a plan and take your money. Smaller niche publishers can be contacted directly, and you should buy a longer-term sponsorship. It might look like you are spending more money for less, but in an industry where 92% of ads might not be seen, the premium of buying real ads to reach real people delivered by real publishers is economically smarter.

The second solution is to become a publisher yourself, using the same tools for creating fresh, original content (articles and video), building audience (e-mail programs), and curating content (save and republish). This is a long-term strategy; it is durable and sustainable, and it enables both brand building and immediate e-commerce opportunities, especially when linked to a mobile strategy and apps.

For large advertisers that need scale and efficiencies from automation, the third solution is to abandon most of the ad-tech, which accounts for about one-third of the costs, and channel those funds back into buying “tonnage.” Acknowledge the fraud and waste, but make it up in volume and run big data analytics instead.
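A back-of-the-envelope sketch of that trade-off, using the article’s one-third ad-tech share and 92%-unseen figure, plus an assumed budget and CPM (all placeholders, not measurements):

```python
# Back-of-envelope: reallocate ad-tech fees into raw media "tonnage."
# The budget, CPM, ad-tech share, and seen rate are assumptions for illustration.
budget = 1_000_000          # total spend in dollars
cpm = 5.00                  # cost per thousand impressions
adtech_share = 1 / 3        # share of spend consumed by ad-tech fees
seen_rate = 0.08            # fraction of served ads actually seen (92% unseen)

def seen_impressions(media_dollars):
    return (media_dollars / cpm) * 1000 * seen_rate

with_adtech = seen_impressions(budget * (1 - adtech_share))
without_adtech = seen_impressions(budget)

print(f"seen impressions with ad-tech fees: {with_adtech:,.0f}")
print(f"seen impressions as pure tonnage:   {without_adtech:,.0f}")
# Same fraud rate, but roughly 50% more seen impressions for the same budget.
```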

Of course, mileage may vary for you and your business, but the reality is that the Internet environment and its technology are not making the digital media and advertising world better.

Accountability is elusive and requires personal hands-on management to succeed. CMD-Y: History.

Ask me anything. Check out some other articles here.

Larry Smith