“Provider pays” for failed automation services
If your AI works as well as you claim, why not make that a promise?
A reasonable hypothesis for why recent AI has not yet automated away more job tasks is that the available systems fail too often, or fail in ways that are otherwise costly. We tolerate significant errors and omissions from machines poorly, perhaps even less than we tolerate the same mistakes from humans. But even after someone develops a system that really is reliable enough to automate a task, potential customers may not know or believe this by default. Where that occurs, both sides would benefit from mechanisms for the seller to credibly promise that the buyer won’t have to worry about costly failures if they switch to the automation. That is, assurances can make automation services more profitable by making customers more willing to pay for them.
What is “provider pays for failure”?
One simple form of assurance is what I’ll call “provider pays for failure”. Consider HotelBot, a hypothetical service that autonomously books hotel rooms on behalf of its users in exchange for a fee. Hotel booking is varied enough that it is hard for a potential customer to become confident in HotelBot based entirely on past user success stories, and expensive enough that most potential customers probably wouldn’t be comfortable risking their own money just to test whether HotelBot can handle their particular needs. So the provider could make them a promise: “If you use HotelBot to automate your booking and it reserves a room for the wrong date, location, or guest count, I will pay you back at minimum the cost of the room + my fee.” With this in hand, potential customers can try out HotelBot while feeling assured that even if it is not 100% reliable, they are at least protected against its unreliability.
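To make the promise concrete, here is a minimal sketch of how the payout rule might be encoded. HotelBot is hypothetical, and the data shapes, field names, and flat fee below are my own illustrative assumptions, not terms from any real offering.

```python
from dataclasses import dataclass

@dataclass
class BookingRequest:
    check_in: str      # e.g. "2026-07-01"
    city: str
    guests: int
    room_price: float  # what the customer paid for the room

@dataclass
class BookingResult:
    check_in: str
    city: str
    guests: int

SERVICE_FEE = 20.00  # illustrative flat fee charged by the provider

def assurance_payout(req: BookingRequest, res: BookingResult) -> float:
    """What the provider owes under 'provider pays for failure'.

    The promise covers three concrete failure modes: wrong date, wrong
    location, wrong guest count. Any one of them triggers a refund of at
    least the room cost plus the service fee; otherwise nothing is owed.
    """
    failed = (
        res.check_in != req.check_in
        or res.city.lower() != req.city.lower()
        or res.guests != req.guests
    )
    return req.room_price + SERVICE_FEE if failed else 0.0
```

Narrowly enumerating the covered failure modes like this is also what makes the promise cheap to adjudicate: either the reservation matches the request on those fields or it doesn’t.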
This “provider pays for failure” assurance derives its value from information asymmetry. It is more attractive the larger the gap between a buyer’s (risk-adjusted) expectation of losses from failure and the true expected losses. It is also more attractive the more difficult or costly it is for the buyer to assess quality on their own.
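A rough worked example of that gap, with numbers that are purely illustrative: if the buyer fears a 10% botch rate but the provider’s true rate is 1%, shifting the risk to the provider creates surplus that the fee can capture part of.

```python
# Illustrative numbers only: how an assurance closes the gap between a
# buyer's feared losses and the provider's true expected payout.
room_cost = 300.0    # loss the buyer eats if the booking is botched
fee = 20.0           # provider's fee per booking

buyer_believed_failure_rate = 0.10   # buyer's pessimistic guess
true_failure_rate = 0.01             # what the provider's logs show

# Without assurance, the buyer discounts the service by their feared loss.
buyer_expected_loss = buyer_believed_failure_rate * room_cost      # 30.0
# With assurance, that risk shifts to the provider at its true cost.
provider_expected_payout = true_failure_rate * (room_cost + fee)   # 3.2

# The buyer's willingness to pay rises by ~$30 while the provider's
# expected cost rises by ~$3.20; the difference is surplus the assurance
# unlocks (ignoring risk aversion, which would widen it further).
surplus = buyer_expected_loss - provider_expected_payout
print(f"Surplus created per booking: ${surplus:.2f}")
```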
On these factors, I think AI services are an attractive candidate for this kind of assurance. The underlying logic of an AI service is often invisible to the user, whether by accident or by design.1 Given this, the provider is in a much better position to know things relevant to failures, such as how the system was built and what fallbacks are in place. There is plausibly already a large gap between how reliable customers think AI systems are and how reliable developers think they will soon be: frontier AI developers have claimed that we will soon see systems that can broadly substitute for human labor on high-value tasks. Also, as with other proprietary API-based software, the service provider can in principle track failures (in the lab or in the field) and thereby estimate failure rates much better than the customer can.
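The provider’s estimation advantage is easy to see in a sketch: with full request/outcome logs, a standard binomial confidence interval on the failure rate becomes tight, in a way no individual customer’s handful of trials can match. The specific figures below are made up for illustration.

```python
import math

def wilson_interval(failures: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a failure rate estimated from logs.

    A provider who records every request/outcome pair can compute this
    directly; a customer who only sees their own few bookings cannot get
    anywhere near this precision.
    """
    if trials == 0:
        return (0.0, 1.0)
    p = failures / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

# e.g. 12 covered failures observed across 10,000 logged bookings
low, high = wilson_interval(12, 10_000)
print(f"Estimated failure rate: {low:.4%} to {high:.4%}")
```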
Why not “provider pays for failure”?
I can see at least three broad reasons why a provider would refuse to offer these assurances on their automation services, in spite of customers’ greater willingness to pay once assured. First, the provider may know that their AI system is not yet reliable enough to make it worth systematically measuring and covering failures. Second, the provider may know that the failures they could cover are not very costly to their customers, relative to what it would take to cover them.2 These first two factors should become less relevant over time, as providers in a competitive market for AI services try to figure out how to improve automation and how to automate higher-value tasks.
Third, the provider may be unable to protect their assurances from being abused by customers who want to make the provider pay out even though the system worked as intended.3 However, it seems like many AI service providers should be able to substantially limit this risk. They have several options, some generic and some specific to AI service offerings. They can narrowly tailor the assurances they provide, reducing the surface area for undeserving claims. They can build their system to be more robust to unusual patterns of usage. They can also monitor their AI systems to detect users who appear to be hunting for failure-triggering jailbreaks or otherwise misusing the system, then suspend those accounts and void the assurances on them, potentially requiring a manual human review before reinstatement. I think one of these, or some similar intervention, could plausibly work well enough to make assurances viable.
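One simple version of that monitoring is statistical: hold payouts on accounts whose claim rate is wildly implausible given the measured base failure rate, and route them to human review. The sketch below is my own illustration; the base rate and threshold are assumptions, not recommendations.

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

BASE_FAILURE_RATE = 0.01   # provider's measured rate across all accounts
FLAG_THRESHOLD = 1e-4      # how implausible a claim pattern must be to flag

def should_flag(claims: int, uses: int) -> bool:
    """Flag an account whose payout claims are far out of line with the
    base failure rate, pausing assurances pending manual review."""
    if uses == 0:
        return False
    return binom_tail(claims, uses, BASE_FAILURE_RATE) < FLAG_THRESHOLD

# e.g. 5 claims in 40 uses is far more than the base rate predicts
print(should_flag(claims=5, uses=40))  # True -> hold payouts, review account
```

A rule like this deliberately errs toward review rather than automatic denial, since a legitimate user could simply be unlucky.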
Closing
To my knowledge, it should not be difficult to implement “provider pays for failure” within automation products and services once the true failure rate is low. Assurances are already provided in other areas as part of standard contracts such as SLAs and manufacturers’ warranties. Additionally, in some industries there are dedicated insurers that offer businesses coverage against certain kinds of internal failures.4 If the risk of automation-linked failure can be estimated by independent entities, then providers can buy insurance from them on the market rather than covering costs themselves. An important caveat is that these assurances do not address risks outside the scope of the agreement, such as risks the automation may pose to third parties. Those risks are better addressed within the framework of liability law, which contract-based assurance schemes are meant to complement.
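For a sense of what buying such coverage might look like, here is a back-of-the-envelope premium calculation of the kind an insurer might run, once an independently verified failure-rate estimate exists. Every number and the loading factor are assumptions for illustration only.

```python
# Rough premium sketch: expected payouts scaled up by a loading factor
# covering the insurer's uncertainty, administration, and profit.
est_failure_rate = 0.01     # upper end of an independently verified estimate
payout_cap = 320.0          # max payout per covered booking (room + fee)
bookings_per_year = 100_000
loading = 0.4               # illustrative loading factor

expected_annual_payouts = est_failure_rate * payout_cap * bookings_per_year
annual_premium = expected_annual_payouts * (1 + loading)
print(f"Expected payouts: ${expected_annual_payouts:,.0f}; "
      f"premium: ${annual_premium:,.0f}")
```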
In both cases, this is useful to know! If an AI system provider refuses to provide assurances, that may give some evidence that their system is not very reliable, and also some evidence that failures are not very costly.