How to Cut Through the Hidden Ads and Marketing During The Early Stages Of AI Adoption

The web is filled with AI research, blogs, platform reviews, and testimonials. Look for the telltale markers: Press Release, Sponsored Content, “In Partnership With {Company}”, “In an interview with {Name}, VP of Product/Marketing at {Company}”, “We’ll explore this and more at {Conference}”.


[Image caption: This is what most research pages look like. How can decision makers parse through the marketing to find solutions?]

Companies hire me to help them build machine learning infrastructure and capabilities, essentially to parse through all of this, but there are biases hiding in the consulting world too. Am I recommending a cloud provider because I’ve got a deal with them to get a cut from every introduction? Am I recommending a services provider because it’s right for my client or because it’s right for me? I don’t have any deals like these, but how does a client know that? This layer of bias ranges from unprofessional to unethical, but at least it’s out in the open.

The next layer of bias is unconscious, but just as impactful. Am I hiring ML engineers who agree with me, or those who will best align with the business objectives and culture? I’m an AWS advocate because I’ve used it a ton and it’s comfortable. I also like Microsoft and Azure for MS shops; I used Microsoft-based languages exclusively for almost 10 years before getting into machine learning. I started coding in C, then C++, and am an advocate for both to optimize model training. I like Uber’s offerings combined with TensorFlow Extended (TFX).

I study new offerings. Facebook has some interesting tools, and there are more examples I’m glossing over. My bias pulls me back toward what I know works; I’m hesitant to put my name on something new.

I’m also biased against certain products. I can think of three companies I won’t ever recommend; I’ve seen the inside of their ML products, data sourcing, and model-building processes. I’ve saved clients money by steering them toward what I know works and away from low-quality offerings. But have I built the best possible solution or just a workable one?

The point is that bias, whether unethical or unconscious, is a project risk that needs to be called out and managed. A/B test your machine learning solutions just like you would a marketing campaign. Most vendors, including Microsoft and Amazon, will give you a free tier or trial period and help with your first project. Vendors and service providers can be brought in on short-term contracts. Hiring processes can be evaluated over the course of a year. Run two solutions head to head and dive into the productivity and ROI numbers.
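As a rough illustration, a head-to-head pilot can be boiled down to a simple metrics comparison. The sketch below is hypothetical: the solution names, quality scores, and costs are placeholders for whatever metric and spend you actually track during each trial period.

```python
# Minimal sketch of a head-to-head pilot comparison: two vendor solutions
# tried on the same workload, scored on a per-task quality metric and cost.
# All names and numbers are hypothetical placeholders.
from statistics import mean, stdev

# Hypothetical per-task quality scores collected during each trial period.
solution_a = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92]   # e.g. incumbent vendor
solution_b = [0.89, 0.94, 0.95, 0.91, 0.93, 0.96]   # e.g. challenger offering

# Hypothetical monthly cost of each pilot (licenses + compute + services).
cost_a, cost_b = 4200.0, 5100.0

def summarize(name, scores, cost):
    """Report mean quality, spread, and a crude cost-per-quality-point figure."""
    avg = mean(scores)
    print(f"{name}: mean={avg:.3f} stdev={stdev(scores):.3f} "
          f"cost=${cost:,.0f} cost per quality point=${cost / avg:,.0f}")

summarize("Solution A", solution_a, cost_a)
summarize("Solution B", solution_b, cost_b)
```

The exact metric matters less than the discipline: both solutions run on the same workload, over the same window, with the numbers written down before anyone argues from preference.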

Also look at taking pieces from both solutions to build a better long-term process. Prove it works, because once an unsupported bias gains traction in the business, it becomes the way we’ve always done things. Inertia to change starts early. Test it -> Prove it cultures are more flexible and less vulnerable to getting saddled with a workable solution instead of an optimal one.
