*Constraining AI with a Contract: A Technical Solution to Drift*

As AI-powered tools become increasingly prevalent in technical work, a common issue has emerged: drift. Drift occurs when an AI system starts making assumptions or filling in gaps the user never explicitly specified, leading to incomplete or inaccurate solutions. In this post, we'll explore a simple yet effective way to mitigate drift: writing a contract to constrain the AI.

The Problem of Drift

Drift is a pervasive problem in AI-assisted technical work. When interacting with an AI system, it's easy to fall into the trap of accepting its "helpful" responses even when they're not entirely accurate. As a result, the solution space can collapse too early, and critical details get overlooked. The issue is particularly pronounced in complex technical problems, where a small misstep can have significant consequences.

Writing a Contract to Constrain the AI

To address the problem of drift, I've developed a small interaction contract that constrains the AI's behavior. The contract consists of a set of rules, including:

* Don't infer missing inputs

* Explicitly mark unknowns

* Don't collapse the solution space

* Separate facts from assumptions

These rules are deliberately simple and rigid, as the goal is to create a clear and unambiguous interaction model. By constraining the AI's behavior, the system is transformed into a more predictable and reliable logic tool, rather than a conversational one.
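To make the idea concrete, here is a minimal sketch of how such a contract might be encoded as a reusable system prompt, with a cheap client-side check that a response actually labels its claims. The exact rule wording and the `FACT:`/`ASSUMPTION:`/`UNKNOWN:` markers are hypothetical illustrations, not the author's published contract:

```python
# Hypothetical sketch: the contract as a system prompt plus a response check.
# Rule phrasing and label markers are assumptions for illustration.

CONTRACT_RULES = [
    "Do not infer missing inputs; ask for them instead.",
    "Mark every unknown explicitly with the prefix 'UNKNOWN:'.",
    "Do not collapse the solution space; keep alternatives open until told otherwise.",
    "Label every claim as either 'FACT:' or 'ASSUMPTION:'.",
]

def build_system_prompt() -> str:
    """Render the contract rules as a numbered system-prompt block."""
    lines = ["You must follow this interaction contract:"]
    lines += [f"{i}. {rule}" for i, rule in enumerate(CONTRACT_RULES, start=1)]
    return "\n".join(lines)

def contract_violations(response: str) -> list[str]:
    """Flag responses that never label claims as facts or assumptions."""
    problems = []
    if "FACT:" not in response and "ASSUMPTION:" not in response:
        problems.append("no FACT:/ASSUMPTION: labels found")
    return problems
```

In use, `build_system_prompt()` is prepended to every session, and `contract_violations()` gives a crude signal for rejecting or retrying a response that ignored the contract. A text-matching check like this is obviously shallow; the real enforcement is the rigid, unambiguous wording of the rules themselves.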

Results and Next Steps

The contract has been surprisingly effective in a variety of technical tasks, including:

* Writing code

* Debugging

* Thinking through system design

While the contract is not a comprehensive solution to drift, it has been a valuable tool for mitigating it. The contract itself is available on GitHub, and I encourage readers to experiment with it or contribute to its development.

Conclusion

The problem of drift in AI-assisted technical work is a significant challenge, but it's not insurmountable. By writing a contract to constrain the AI's behavior, we can create a more reliable and predictable interaction model. As AI continues to evolve, it's essential to develop solutions like this contract to ensure that these systems are used safely and effectively.