
Insurance Careers Corner: Q&A with Sunil Rawat, Co-Founder and CEO of Omniscience


By Marielle Rodriguez, Social Media and Brand Design Coordinator, Triple-I

Sunil Rawat

Triple-I’s “Insurance Careers Corner” series was created to highlight trailblazers in insurance and to spread awareness of the career opportunities within the industry.

This month we interviewed Sunil Rawat, Co-Founder and CEO of Omniscience, a Silicon Valley-based AI startup that specializes in Computational Insurance. Omniscience offers five “mega-services,” comprising underwriting automation, customer intelligence, claims optimization, risk optimization, and actuarial guidance, to help insurance companies improve their decision-making and achieve greater success.

We spoke with Rawat to discuss his technical background, the role of Omniscience technology in measuring and assessing risk, and the potential flaws in underwriting automation.

Tell me about your interest in building your business. What led you to your current position and what inspired you to found your company?

I’m from the technology industry. I worked for Hewlett Packard for about 11 years, and hp.com grew about 100,000% during my tenure there. Then I helped Nokia build out what is now known as Here Maps, which in turn powers Bing Maps, Yahoo Maps, Garmin, Mercedes, Land Rover, Amazon, and other mapping systems.

I met my co-founder, Manu Shukla, several years ago. He’s more of the mad scientist, applied mathematician. He wrote the predictive caching engine in the Oracle database, the user profiling system for AOL, and the recommender system for Comcast. For Deloitte Financial Advisory Services, he wrote the text mining system used in the Lehman Brothers probe, the Deepwater Horizon probe, and the recent Volkswagen emissions investigation. He’s the ‘distributed algorithms guy,’ and I’m the ‘distributed systems guy.’ We’re both deeply technical, and we’ve got the ability to do compute at very high scale.

We see increasing complexity in the world, whether it’s demographic, social, ecological, political, technological, or geopolitical. Decision-making has become much more complex. Where human lives are at stake, or where large amounts of money ride on each individual decision, the accuracy of each decision must be extremely high. That’s where we can leverage our compute, drawing on what we’ve learned over the last 20 years, and bring it to the insurance domain. That’s why we founded the company — to solve these complex risk management problems. We’re really focused on computational finance, and more specifically, computational insurance.

What is Omniscience’s overall mission?

It’s to become the company that leaders go to when they want to solve complex problems. It’s about empowering leaders in financial services to improve risk selection through hyperscale computation.

What are your main products and services and what role does Omniscience technology play?

One of our core products is underwriting automation. We like to solve intractable problems. When we look at underwriting, we think about facultative underwriting for life insurance, where you need human underwriters because the decision-making heuristic is so complex. Consider somebody who’s a 25-year-old nonsmoker asking for a 10-year term policy of $50,000 — it’s kind of a no-brainer, and you can give them that policy. On the other hand, if they were asking for $50 million, you’re certainly going to ask for a blood test, a psychological exam, a keratin hair test, and everything in between. You need humans to make these decisions. We managed to take that problem and use our technology to digitize it. If you take a few hundred data fields and a few hundred thousand cases to build an AI model, it quickly becomes completely intractable from a compute standpoint. That’s where we can use our technology to look at all the data in all its facets — we automate and use all of it.
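
To make the triage heuristic concrete, here is a minimal sketch. The thresholds, requirement names, and function are hypothetical illustrations, not Omniscience’s actual rules:

```python
# Illustrative only: a toy version of the facultative-underwriting triage
# described above. All thresholds and requirement names are hypothetical.
def evidence_required(age: int, smoker: bool, face_amount: float) -> list:
    """Return the medical evidence an underwriter might request."""
    if face_amount <= 100_000 and age < 40 and not smoker:
        return []  # the "no-brainer" case: issue with no extra evidence
    evidence = []
    if face_amount > 1_000_000:
        evidence.append("blood test")
    if face_amount > 10_000_000:
        evidence += ["psychological exam", "keratin hair test"]
    return evidence

print(evidence_required(25, False, 50_000))      # [] -> straight-through
print(evidence_required(25, False, 50_000_000))  # full evidence list
```

The real system replaces this handful of rules with a model over a few hundred data fields and a few hundred thousand cases, which is where the compute problem comes from.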

Once you’ve got an AI underwriter’s brain in software, you think from the customer intelligence standpoint. You’ve got all this rich transaction data from your customers to pre-underwrite, qualify, and recommend them for different products. We’ve also built a great capability in the data acquisition area. For workers’ comp and general liability, we have data that improves the agent experience. We can also correctly classify NAICS codes and can help with claims avoidance and finding hidden risk. We’ve also got a great OCR capability: we can take complex tabular data and digitize it without any human in the loop, worldwide, even in complex Asian languages. We also do a lot of work in asset and liability management, where we can do calculations that historically have been done in a very low-powered, inaccurate manner. We can run these calculations daily or weekly, versus annually, which makes a big difference for insurance companies.

We also work in wildfire risk. A lot of wildfire spread models work at a ZIP+4 or ZIP Code level, and they take about four hours to predict one hour of wildfire spread, so about 96 hours to predict one day of spread at a ZIP Code level. In California, where I am, we had lots of wildfires last year. Every time you double the density of the grid, the computation goes up 8x. We were able to refine the grid to 30-meter squares, almost individual-property size, so you can look at the risk of each house individually. At that 30-meter level, we can do one hour of wildfire propagation in 10 seconds, basically one day in about four minutes.
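
The scaling quoted here can be sanity-checked with simple arithmetic. A plausible reading (an assumption on our part, not stated in the interview) is that doubling grid density quadruples the cell count in two dimensions and halves the stable time step, which yields the 8x:

```python
# Back-of-envelope check of the quoted numbers. The 8x factor is consistent
# with 2D grid refinement: 4x more cells plus a halved time step (assumed).
cells = 2 ** 2                 # doubling density in each of 2 dimensions
timesteps = 2                  # finer cells need smaller steps (assumed)
print(cells * timesteps)       # 8

zip_level = 4 * 3600           # ~4 hours per simulated hour, in seconds
fine_grid = 10                 # ~10 seconds per simulated hour at 30 m
print(zip_level // fine_grid)  # 1440 -> roughly three orders of magnitude
print(24 * fine_grid / 60)     # 4.0 -> one simulated day in ~4 minutes
```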

Are there any potential flaws in relying too much on automation technology that omits the human element?

Absolutely. The problem with AI systems is that they’re generally only as good as the data they’re built on. The number one thing is that because we can look at all the data in all its facets, we can get to 90+ percent accuracy on each individual decision. You also need explainability. It’s not like an underwriter decides in a snap and then justifies the decision; from a regulatory and auditability standpoint, you must document the decision as you go through the decision-making process.

If you’re building a model off historical data, how do you make sure certain groups don’t get biased against again? You need bias testing. Explainability, transparency, scalability, adjustability — these are all very important. From a change management and risk management standpoint, you have the AI make the decision, and then you have a human review it. After you’ve run that process for some months, you can introduce the AI in a very risk-managed way. Every AI should also state its confidence in its decision. It’s very easy to make a decision, but you must also be able to state your confidence number, and humans must always pay attention to that confidence number.
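
A minimal sketch of the confidence-gated, human-in-the-loop rollout described here, assuming a hypothetical Decision record and review threshold:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approve: bool
    confidence: float  # every AI decision must state this
    rationale: str     # documented as the decision is made, for auditability

REVIEW_THRESHOLD = 0.90  # hypothetical; tuned during the supervised months

def route(d: Decision) -> str:
    """Auto-execute only high-confidence decisions; escalate the rest."""
    if d.confidence >= REVIEW_THRESHOLD:
        return "auto-issue" if d.approve else "auto-decline"
    return "human review"

print(route(Decision(True, 0.97, "standard risk, clean labs")))  # auto-issue
print(route(Decision(True, 0.62, "conflicting disclosures")))    # human review
```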

What is traditional insurance lacking in terms of technology and innovation? How is your technology transforming insurance?

Insurers know their domain better than any insurtech ever can. In some ways, insurance is the original data science. Insurers are very brilliant people, but they don’t have experience with software engineering and scale computing. The first instinct is to look at open-source tools, or to buy tools from vendors, and build their own models. That doesn’t work, because the methods are so different. It’s kind of like saying, “I’m not going to buy Microsoft Windows, I’m going to write my own Microsoft Windows,” but that’s not their core business. They should use Microsoft Windows to run Excel and build actuarial models, not try to write the operating system themselves.

We are good at systems programming and scale computing because we’re from a tech background. I wouldn’t be so arrogant as to think we know as much about insurance as any insurance company, but it’s through that marriage of domain expertise in insurance and domain expertise in compute that leaders in the field can leapfrog their competitors.

Are there any projects you’re currently working on, and any trends in big data that you’re excited about?

Underwriting and digitization, cat management, and wildfire risk are exciting, as is some of the work we’re doing in ALM calculations. When regulators ask you to show that you have enough assets to meet your liabilities for the next 60 years on a nested quarterly basis, that becomes very complex. That’s where our whole mega-services come in — if you can tie together your underwriting, claims, and capital management, then you can become much better at selection, and you can decide how much risk you want to take in a very dynamic way, as opposed to a very static way.
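
To see why the nested quarterly requirement is heavy, count the projections involved; the scenario counts below are illustrative assumptions, not Omniscience figures:

```python
# Why nested quarterly ALM projection over 60 years gets expensive.
years, quarters_per_year = 60, 4
outer_scenarios = 1_000   # paths in the outer projection (assumed)
inner_scenarios = 1_000   # re-projections from each quarterly node (assumed)

nodes_per_path = years * quarters_per_year        # 240 valuation dates
total = outer_scenarios * nodes_per_path * inner_scenarios
print(f"{total:,} inner projections")             # 240,000,000
```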

The other thing we’re excited about is asset management. We’re doing some interesting work with a very large insurer, where we’ve been able to boost returns through various strategies. That’s an area we expect to grow quite rapidly in the next year.

What are your goals for 2021 and beyond?

It’s about helping insurers develop this multi-decade compounding advantage through better selection, and we’re just going to continue to execute. We’ve got a lot of IP and technology developed, and we’ve got pilot customers in various geographies that have used our technology. We’ve got the proof points and the case studies, and now we’re doubling down on growing our business, whether it’s with the customers we have or by going into more product lines. We are focused on serving those customers and signing on a few more in the areas where we are active: Japan, Hong Kong, China, and North America. We are focused on methodically executing on our plan.

