Zealous marketing departments, capital-hungry startup founders and overeager reporters are casting the futuristic sheen of artificial intelligence over many products that are actually driven by simple statistics — or hidden people.
Why it matters: This “AI washing” threatens to overinflate expectations for the technology, undermining public trust and potentially setting up the booming field for a backlash.
The big picture: The tech industry has always been infatuated with the buzzword du jour. Before AI landed in this role, it belonged to “big data.” Before that, everyone was “in the cloud” or “mobile first.” Even earlier, it was “Web 2.0” and “social software.”
- About three years ago, every company became an “AI company,” says Frank Chen, a partner at Andreessen Horowitz, a leading Silicon Valley VC firm.
- Now, investing in a purported AI startup requires detective skills, Chen says: “We have to figure out the difference between ‘machine learning that can deliver real competitive differentiation’ and ‘fake ML that is a marketing gloss over linear regressions or a big team in the Philippines transcribing speech manually.’”
Plenty of companies lean on one tactic or the other, straddling the line between attractive branding and misdirection.
- For hard tasks, like transcribing audio or scanning documents, humans often step in when AI algorithms fail. Take Engineer.ai, for example, a company that raised nearly $30 million to automate app design — but was secretly making apps using human developers overseas.
- For easier jobs, “AI” may in fact be a shiny term for basic statistics. If you can swap in “data analytics” for “AI” in a company’s marketing materials, the company is probably not using AI.
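To see how thin that rebranding can be, here is an illustrative sketch (hypothetical data and function names, not any real company's code): a single ordinary least-squares line fit, textbook statistics that long predates the deep-learning boom, is enough to power a "predictive AI" feature.

```python
# Hypothetical example: the "AI" behind a usage-forecasting feature
# may be an ordinary least-squares line fit -- classical statistics.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# Months of usage vs. dollars spent (made-up numbers for illustration)
months = [1, 2, 3, 4, 5]
spend = [12, 19, 31, 42, 49]
slope, intercept = fit_line(months, spend)

# The "AI-powered forecast" is just the fitted line extended one step.
predicted_month_6 = slope * 6 + intercept
```

If a product's "AI" reduces to something like this, "data analytics" is the honest label.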
“It’s really tempting if you’re a CEO of a tech startup to AI-wash because you know you’re going to get funding,” says Brandon Purcell, a principal analyst at Forrester.
- The cycle continues because nobody wants to miss out on investing in — or being — the next Google or Facebook.
- CEOs demand that their companies “use AI,” without regard for how or whether it’s necessary, says Svetlana Sicular, research VP at Gartner.
The tech sector’s fake-it-till-you-make-it attitude plays into the problem.
- Many AI systems are slow to improve and require a good deal of human hand-holding at first, says Andrew Ng, founder of Landing.ai, a startup that helps other companies implement AI.
- “But problems arise when the difficulty of moving to higher levels of automation is underestimated, either by the company or by the broader community,” Ng tells Axios. “Or when the degree of automation at a given moment is misrepresented.”
The confusion and deception get an assist from the fuzzy definition of AI. It covers everything from state-of-the-art deep learning, which powers most autonomous cars, to 1970s-era “expert systems” that are essentially huge sets of human-coded rules.
- Yes, but: The term isn’t going anywhere. So a cautious consumer, investor or CEO has to pay extra-close attention to anything waving the AI banner to determine whether it’s a groundbreaking innovation — or just three kids in a trenchcoat.
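The gap inside that definition is wide. A 1970s-style "expert system" is, at bottom, a stack of hand-written if-then rules with no learning involved, as in this toy sketch (the rules and loan fields are hypothetical, for illustration only):

```python
# Toy "expert system": a hand-coded rule set, applied in order.
# Nothing here is learned from data -- every rule is human-written.

RULES = [
    (lambda loan: loan["income"] < 20_000, "deny: income too low"),
    (lambda loan: loan["debt_ratio"] > 0.5, "deny: debt ratio too high"),
    (lambda loan: loan["years_employed"] >= 2, "approve"),
]

def decide(loan):
    """Apply the rules top to bottom; the first matching rule wins."""
    for condition, outcome in RULES:
        if condition(loan):
            return outcome
    return "refer to human underwriter"

decision = decide({"income": 55_000, "debt_ratio": 0.3, "years_employed": 4})
```

Under the term's loosest usage, both this rule list and a billion-parameter neural network count as "AI" — which is exactly why the label alone tells a buyer or investor so little.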