AI Safety Vision

Importance of AI Safety

As described in AI Safety, there are many open questions about the safety and predictability of AI that still need to be studied, understood, and addressed. The industry has accumulated some experience and best practices that are worth studying, and some of them may even be mandatory to implement.

It would be worthwhile to study the existing experience in more depth, research what the possibilities are, and develop an AI-safety policy and vision. Ideally, of course, we would have interpretability of neural networks and LLMs and a full understanding of how they work, in order to ensure control over the technology. At the same time, an iterative approach might be a good way to tackle this: I assume that achieving full explainability of LLMs, or an understanding sufficient to deploy predictable, high-quality results to production, will take quite some time, while building iterations of the product with what we have now (fuzzy, probabilistic LLMs) lets us show visible progress right away; see the sketch below.
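As a purely illustrative sketch of what such an iteration could look like (none of this is an existing API; call_llm, OutputPolicy, and the policy rules are hypothetical placeholders), one early, low-cost step is to wrap every LLM call in an explicit validation layer, so that unpredictable outputs are caught and retried or refused instead of reaching users, without waiting for interpretability research to mature:

    # Hypothetical sketch: a "guardrails first" wrapper around an LLM call.
    # All names here (call_llm, OutputPolicy, guarded_completion) are
    # illustrative placeholders, not a real library API.

    import re
    from dataclasses import dataclass

    @dataclass
    class OutputPolicy:
        max_length: int = 2000
        # Example rules only; a real policy would be far richer.
        banned_patterns: tuple = (r"(?i)api[_-]?key", r"(?i)password")

    def violates_policy(text: str, policy: OutputPolicy):
        """Return a human-readable reason if text breaks the policy, else None."""
        if len(text) > policy.max_length:
            return f"output exceeds {policy.max_length} characters"
        for pattern in policy.banned_patterns:
            if re.search(pattern, text):
                return f"output matches banned pattern {pattern!r}"
        return None

    def call_llm(prompt: str) -> str:
        """Placeholder for the real model call; returns a canned answer here."""
        return f"Stub answer for: {prompt}"

    def guarded_completion(prompt: str, policy: OutputPolicy, retries: int = 2) -> str:
        """Call the model, validate the output, and retry or refuse on failure."""
        for _ in range(retries + 1):
            answer = call_llm(prompt)
            if violates_policy(answer, policy) is None:
                return answer
            # A real system would also log the violation for later analysis.
        return "Sorry, no safe answer could be produced for this request."

    if __name__ == "__main__":
        print(guarded_completion("Summarize our AI safety policy.", OutputPolicy()))

The point of the sketch is the shape, not the rules: each iteration can tighten the policy and logging while the underlying model remains imperfectly understood.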


Examples

The following could be an interesting example and serve as inspiration for developing something similar for the SAFe Portal:

Atlassian’s Responsible Technology