Infrastructure for AI: We’re doing it wrong

October 26, 2018

From a young age, we were taught that the early bird gets the worm, but that lesson is often forgotten when building out AI infrastructure. I see teams get caught up in building a fast, cheap prototype and saving the real engineering for later. By then, it really is too late.

I realized this was the case after I was contracted to build out AI infrastructure for a startup over an eight-month period. While I was working on the prototype, the founders were actively looking for product-market fit; when I was done, they could immediately go to market.

To do that, I had to think beyond a quick, bare-bones prototype. I had to build something that could scale immediately, which would require some serious planning – particularly if I wanted to make the deadline.

What you need to build out an AI infrastructure

First, I decided to work with modern tools and deploy everything with Kubernetes on Google Cloud. Even for the initial bare-bones deployment, that ease of use wouldn't compromise my ability to scale later with the same software. Newer tools give developers the ability to move rapidly from build to test to production.

While most companies start off with a more monolithic structure, thinking a little long-term about the needs here made it clear that we needed a microservice architecture right off the runway.

In the past, we’ve seen discrepancies between testing and production environments cause things to go off the rails when it comes to data science. If your AI experts are using one set of tools to design and develop algorithms while the engineers use different ones to deploy them to production, you introduce a huge feedback loop and a whole extra failure mode where something can get lost in translation. Microservices meant that if a data scientist developed an algorithm in R, we could deploy a new service in R to run it in production.
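As a sketch of what this polyglot setup can look like in practice, here is a minimal Kubernetes Deployment for an R-based microservice. The service name, container image, and port below are hypothetical, not details from the original project:

```yaml
# Hypothetical manifest: runs an R scoring algorithm as its own microservice,
# side by side with services written in other languages.
# Names, image, and port are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: r-scoring-service
spec:
  replicas: 2                    # scale up later without changing the app
  selector:
    matchLabels:
      app: r-scoring-service
  template:
    metadata:
      labels:
        app: r-scoring-service
    spec:
      containers:
        - name: scorer
          # Image with the R runtime and the data scientist's model baked in
          image: gcr.io/example-project/r-scorer:1.0.0
          ports:
            - containerPort: 8000
```

Because the algorithm ships as its own container, the data scientist's R environment and the production environment are the same image, which avoids the lost-in-translation failure mode described above.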


By building the algorithm and deploying it with consistent tools, businesses can save a huge amount of time and money. And the longer you wait, the bigger the problems get. (This is why I advocate for Kubernetes: it lets you run both small and large infrastructure with the same tooling.)

The startup ended up being bought out, thanks to the solid tools we built. Since then, I have made it a point to let clients know that they have options in how they approach infrastructure. Modern tools have significantly lowered the cost of doing things in a scalable, polyglot way from the get-go.

Some may ask: if this approach is so great, why don't more businesses shift their thinking? Because technology moves fast and you need to be faster than your competitor, the focus becomes an immediate demonstration of value, which looks much cheaper and quicker than the early-bird strategy. Understanding these concerns, and seeing where leadership is coming from, can help development teams explain why the upfront investment is worth much more than its price tag.


Source: JAXenter