“Investing in Artificial Intelligence: a VC perspective” by Nathan Benaich


Nathan Benaich is an investor with Playfair Capital, an early-stage VC fund based in London with a strong focus on AI. Prior to Playfair, Nathan earned a PhD in oncology as a Gates Scholar at the University of Cambridge and a BA in biology from Williams College, during which time he published research on technologies to halt the fatal spread of cancer around the body. Read Nathan’s blog here and website here.

Nathan’s (expanded) talking points from a presentation he gave at the Re.Work Investing in Deep Learning dinner in London on 1st December 2015.

TL;DR Check out the slides here.

a. Why now?

It’s my belief that artificial intelligence is one of the most exciting and transformative opportunities of our time. There are a few reasons why that’s so. Consumers worldwide carry 2 billion smartphones, are increasingly addicted to these devices, and 40% of the world is online (KPCB). This means we’re creating new data assets that never existed before (user behavior, preferences, interests, knowledge, connections).

The costs of compute and storage are both plummeting by orders of magnitude, while the computational capacity of today’s processors is growing. We’ve seen improvements in learning methods, architectures and software infrastructure. The pace of innovation can therefore only accelerate. Indeed, we don’t fully appreciate what tomorrow will look and feel like.

AI-driven products are already out in the wild and improving the performance of search engines, recommender systems (e.g. e-commerce, music), ad serving and financial trading (amongst others). The community, therefore, both understands learning systems better and is equipped with more capable tools with which to build them for a wide range of increasingly complex tasks.

b. How might you apply AI technologies to today’s market?

  • Look at the vast amounts of enterprise and open data available in various data silos (web or on-premise). Making connections between these silos enables a holistic view of a complex problem, from which new insights can be identified and used to make predictions. DueDil, Premise and Enigma, for example, attack the market in this way.
  • Leverage your domain expertise and address a focused, high-value, recurring problem using a set of AI techniques that compensate for the shortfalls of humans. For example, online fraud detection (Sift Science, Ravelin) and personal loans (ZestFinance, Kreditech). Here, making predictions from new fraud patterns and from applicants with thin files, respectively, quickly becomes an intractable problem for hand-crafted solutions (see the sketch after this list).
  • Have you developed a new ML/DL framework (feature engineering, data processing, algorithms, model training, deployment) that is applicable to a wide variety of commercial problems? Are you productising existing frameworks with additional tooling and providing this packaged solution to end customers? H2O.ai, Seldon and Prediction.io are working in this space.
  • Study the repetitive, mundane, error-prone and slow processes conducted by knowledge workers on a daily basis. Where there’s a structured workflow with measurable parameters and outcomes, automation using contextual decision making can help. Gluru, x.ai and SwiftKey take this approach.
  • Interactions between autonomous agents in the physical world rely on contextual sensor inputs (perception), logic and intelligence. Tesla, Matternet and SkyCatch are squarely focused on realising this vision.
  • Take the long view and focus on research and development, taking risks that would otherwise be relegated to academia (though, given strict budgets, often aren’t anymore). DNN Research, DeepMind and Vicarious are in this exciting (but risky) game.
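
To make the fraud-detection point above concrete, here’s a minimal sketch (my own illustration on synthetic data, not any vendor’s actual approach) of why a model that combines many weak signals tends to beat a hand-crafted rule that keys on one obvious one:

```python
# A minimal, synthetic sketch (not any vendor's actual approach): a model that
# combines many weak behavioural signals vs. a rule keyed on one obvious one.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "transactions": 20 behavioural/contextual features, ~3% fraud
X, y = make_classification(
    n_samples=10_000, n_features=20, n_informative=8,
    weights=[0.97, 0.03], shuffle=False, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0,
)

# A hand-crafted rule in effect keys on one obvious signal (approximated here
# by a model that is only allowed to see the first feature)
rule = LogisticRegression().fit(X_train[:, :1], y_train)
rule_scores = rule.predict_proba(X_test[:, :1])[:, 1]

# A learned model combines all the weak signals at once and can be retrained
# as new fraud patterns appear
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
model_scores = model.predict_proba(X_test)[:, 1]

print("single-signal rule AUC:", round(roc_auc_score(y_test, rule_scores), 3))
print("learned model AUC:     ", round(roc_auc_score(y_test, model_scores), 3))
```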

More on this discussion here. A key consideration, in my view, is that the open sourcing of technologies by large incumbents (Google, Microsoft, Intel, IBM) and the range of companies productising technologies for cheap mean that technical barriers are eroding fast. What ends up moving the needle are proprietary data access/creation, experienced talent and addictive products.

c. Which challenges are faced by operators and closely considered by investors?

Operational

  • Do you take the longer-term R&D route or instead monetise in the short term? While more libraries and frameworks are being released, there’s still significant upfront investment to be made before product performance is acceptable. Users will often benchmark against a result produced by a human, so that’s what you’re competing against.
  • The talent pool is shallow (circa 150k users on Kaggle). Few have the right blend of skills and experience. How will you source and retain talent?
  • Think about balancing engineering with product research and design early on. Working on aesthetics and experience as an afterthought is tantamount to slapping lipstick onto a pig. It’ll still be a pig.

Commercial

  • AI products are still relatively new in the market. As such, buyers are likely to be non-technical (or not have enough domain knowledge to understand the guts of what you do). They might also be new buyers of the product you sell. Hence, you must closely appreciate the steps/hurdles in the sales cycle.
  • How to deliver the product? SaaS, API, open source?
  • Include chargeable consulting, setup, or support services?
  • Will you be able to use high level learnings from client data for others?

Financial

  • Which type of investors are in the best position to appraise your business? Come speak to us :)
  • What progress is deemed investable? MVP, publications, open source community of users?
  • Should you focus on core product development or work closely on bespoke projects with clients along the way?
  • Consider buffers when raising capital to ensure that you’re not going out to market again before you’ve reached a significant milestone.

d. Build with the user-in-the-loop

There are two big factors that make involving the user in an AI-driven product paramount. 1) Machines don’t yet recapitulate human cognition. In order to pick up where software falls short, we need to call on the user for help. 2) Buyers/users of software products have more choice today than ever. As such, they’re often fickle (avg. 90-day retention for apps is 35%). Returning expected value out of the box is key to building habits (hyperparameter optimisation can help). A number of products already show that involving the user in the loop improves performance.
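
To make the first point concrete, here’s a minimal sketch (my own illustration, not any particular product’s implementation) of a user-in-the-loop workflow in the active-learning style: the model asks the user to label only the examples it’s least confident about, and each round of feedback improves the next fit. The `ask_user_to_label` helper is a hypothetical stand-in for a real product’s feedback UI.

```python
# A minimal sketch of user-in-the-loop learning (active learning), assuming
# scikit-learn and synthetic data. `ask_user_to_label` is a hypothetical
# stand-in for a product's feedback UI.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labelled = np.zeros(len(X), dtype=bool)
labelled[:50] = True  # a small seed set labelled by hand

def ask_user_to_label(indices):
    """Hypothetical feedback UI; here we simply reveal the ground-truth labels."""
    return y[indices]

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    n_fit = int(labelled.sum())
    model.fit(X[labelled], y[labelled])
    # Score the model's confidence on the still-unlabelled pool
    pool = np.flatnonzero(~labelled)
    proba = model.predict_proba(X[pool])[:, 1]
    # Ask the user about the 25 examples the model is least sure of
    query = pool[np.argsort(np.abs(proba - 0.5))[:25]]
    y[query] = ask_user_to_label(query)
    labelled[query] = True
    print(f"round {round_}: fit on {n_fit} labels, "
          f"{int(labelled.sum())} available for the next round")
```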

We can even go a step further, I think, by explaining how machine-generated results are obtained. For example, IBM Watson surfaces relevant literature when supporting a patient diagnosis in the oncology clinic. Doing so improves user satisfaction and helps build confidence in the system to encourage longer term use and investment. Remember, it’s generally hard for us to trust something we don’t truly understand.

e. What’s the AI investment climate like these days?

To put this discussion into context, let’s first look at the global VC market. Q1-Q3 2015 saw $47.2bn invested, a volume higher than each of the full-year totals for 17 of the last 20 years (NVCA). We’re likely to breach $55bn by year end. There are circa 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. Q4 2014 saw a flurry of deals into AI companies started by well-respected and accomplished academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies.

So far in 2015 (1st January through 1st December), we’ve seen circa 300 deals into AI companies (defined as businesses whose description includes the keywords artificial intelligence, machine learning, computer vision, NLP, data science, neural network or deep learning). In the UK, companies like Ravelin, Signal and Gluru raised seed rounds. Circa $2bn was invested, albeit bloated by large venture debt or credit lines for consumer/business loan providers Avant ($339m debt + credit), ZestFinance ($150m debt), LiftForward ($250m credit) and Argon Credit ($75m credit) (CB Insights). Importantly, 80% of deals were < $5m in size and 90% of the cash was invested into US companies vs. 13% in Europe. 75% of rounds were in the US.

The exit market has seen 33 M&A transactions and 1 IPO (Adgorithms on the LSE). Six events were for European companies, one for an Asian company and the rest were accounted for by American companies. The largest transactions were TellApart/Twitter ($532m; $17m raised), Elastica/Blue Coat Systems ($280m; $45m raised) and SupersonicAds/IronSource ($150m; $21m raised), which returned solid multiples of invested capital. The remaining transactions were mostly for talent, given that the median team size at the time of acquisition was seven people.

Altogether, AI investments will have accounted for circa 5% of total VC investments for 2015. That’s higher than the 2% claimed in 2013, but still tracking far behind competing categories like adtech, mobile and BI software. The key takeaway points are a) the financing and exit markets for AI companies are still nascent, as exemplified by the small rounds and low deal volumes, and b) the vast majority of activity takes place in the US. Businesses must therefore have exposure to this market.

f. Which problems remain to be solved? Here are two:

1. Healthcare

I spent a number of summers in university and 3 years in grad school researching the genetic factors governing the spread of cancer around the body. A key takeaway I left with is the following: therapeutic development is a very challenging, expensive, lengthy and regulated process that ultimately offers only a transient solution to treating disease. Instead, I truly believe that what we need to improve healthcare outcomes is granular and longitudinal monitoring of physiology and lifestyle. This should enable early detection of health conditions in near real-time, drive down the cost of care over a patient’s lifetime and, consequently, improve outcomes.

Consider the digitally connected lifestyles we lead today. The devices some of us interact with on a daily basis are able to track our movements, vital signs, exercise, sleep and even reproductive health. We’re disconnected for fewer hours of the day than we’re online, and I think we’re less apprehensive about storing various data types in the cloud (where they can be accessed, with consent, by third parties). Sure, the news might paint a different picture, but the fact is that we’re still using the web and its wealth of products.

On a population level, therefore, we have the chance to interrogate data sets that have never before existed. From these, we could glean insights into how nature and nurture influence the genesis and development of disease. That’s huge. Look at today’s clinical model: a patient presents at the hospital when they feel something is wrong. The doctor has to conduct a battery of tests to derive a diagnosis. These tests address a single (often late-stage) time point, at which moment little can be done to reverse damage (e.g. in the case of cancer). Now imagine the future. In a world of continuous, non-invasive monitoring of physiology and lifestyle, we could predict disease onset and outcome, understand which condition a patient likely suffers from and how they’ll respond to various therapeutic modalities. There are loads of applications for artificial intelligence here: intelligent sensors, signal processing, anomaly detection, multivariate classifiers, deep learning on molecular interactions…
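
To make the anomaly-detection idea concrete, here’s a minimal sketch (my own illustration, not taken from any of the companies below) that learns a personal baseline from simulated daily readings and flags days that drift away from it; the features and numbers are purely illustrative assumptions:

```python
# A minimal sketch: flag anomalous days in continuously monitored vital signs
# using an IsolationForest. The data is simulated and the features (resting
# heart rate, hours of sleep) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 90 days of baseline readings for one person
baseline = np.column_stack([
    rng.normal(62, 3, 90),     # resting heart rate (bpm)
    rng.normal(7.5, 0.6, 90),  # sleep (hours)
])

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(baseline)

# New readings arrive; the last one drifts away from the personal baseline
new_days = np.array([
    [63.0, 7.2],
    [61.5, 7.8],
    [78.0, 5.1],  # elevated heart rate and poor sleep
])
flags = detector.predict(new_days)  # +1 = looks normal, -1 = anomalous
for reading, flag in zip(new_days, flags):
    status = "anomaly - worth a closer look" if flag == -1 else "normal"
    print(reading, status)
```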

Some companies are already hacking away at this problem:

  • Sano: Continuously monitor biomarkers in blood using sensors and software.
  • Enlitic/MetaMind/Zebra Medical: vision systems for decision support (MRI/CT).
  • Deep Genomics/Atomwise: learn, model and predict how genetic variation influences health/disease and how drugs can be repurposed for new conditions.
  • Flatiron Health: common technology infrastructure for clinics and hospitals to process oncology data generated from research.
  • Google: filed a patent covering an invention for drawing blood without a needle. This is a small step towards wearable sampling devices.

2. Enterprise automation

Could businesses ever conceivably run themselves? AI-enabled automation of knowledge work could cut employment costs by $9tn by 2020 (BAML). Coupled with the efficiency gains worth $1.9tn driven by robots, I reckon there’s a chance of near-complete automation of core, repetitive business functions in the future. Think of all the productised SaaS tools that are available off the shelf for CRM, marketing, billing/payments, logistics, web development, customer interactions, finance, hiring and BI. Then consider tools like Zapier or Tray.io, which help connect applications and program business logic. These could be further expanded by leveraging contextual data points that inform decision making. Perhaps we could eventually re-imagine the new eBay, with fully automated inventory procurement, pricing, listing generation, translation, recommendations, transaction processing, customer interaction, packaging, fulfilment and shipping. Of course, that’s probably a ways off :)
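
As a toy illustration of that “business logic plus contextual decision making” idea (it assumes nothing about Zapier’s or Tray.io’s actual APIs; every function below is a hypothetical stub), here’s what a small automated order workflow with a risk decision in the middle might look like:

```python
# A toy sketch of connecting business apps with a contextual decision in the
# middle. It assumes nothing about Zapier's or Tray.io's real APIs; crm_update,
# send_invoice and queue_for_review are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Order:
    customer_id: str
    amount: float
    country: str
    chargebacks_last_year: int

def risk_score(order: Order) -> float:
    """Stand-in for a trained model; here a hand-tuned heuristic."""
    score = 0.0
    score += 0.4 if order.amount > 1_000 else 0.0
    score += 0.3 * min(order.chargebacks_last_year, 3) / 3
    score += 0.2 if order.country not in {"GB", "US", "DE"} else 0.0
    return score

def crm_update(order: Order, status: str) -> None:
    print(f"CRM: {order.customer_id} -> {status}")

def send_invoice(order: Order) -> None:
    print(f"Billing: invoice sent for £{order.amount:.2f}")

def queue_for_review(order: Order) -> None:
    print(f"Ops: order from {order.customer_id} held for human review")

def handle_order(order: Order) -> None:
    if risk_score(order) < 0.5:   # contextual decision point
        crm_update(order, "approved")
        send_invoice(order)
    else:
        crm_update(order, "on hold")
        queue_for_review(order)   # a human stays in the loop

handle_order(Order("acme-42", 250.0, "GB", 0))
handle_order(Order("newco-7", 4_800.0, "VU", 2))
```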

g. Wrapping up: here’s my outlook

I’m bullish on the value to be created with artificial intelligence across our personal and professional lives. I think there’s currently low VC risk tolerance for this sector, especially given shortening investment horizons for value to be created. More support is needed for companies driving long-term innovation, especially as far less of it is now occurring within universities. VC was born to fund moonshots.

We must remember that access to technology will, over time, become commoditised. It’s therefore key to understand your use case, your user, the value you bring and how it’s experienced and assessed. There is a renewed focus on core principles: build a solution to an unsolved or poorly served, high-value, persistent problem for consumers or businesses.

Finally, you must have exposure to the US market where the lion’s share of value is created and realised. We have an opportunity to catalyse the growth of the AI sector in Europe, but not without keeping close tabs on what works/doesn’t work across the pond first-hand.

Working in the space? We’d love to get to know you :)

Note: this article originally appeared on Nathan’s blog here.
