Issues of cooperation in the regulation of computing power, the joint insurance of AI products, and the determination of the social status and protection of persons with a hybrid nervous system (hybrid people)
Dear Colleagues and Friends,
In furtherance of the discussion on the topic "Key technology policy issues we will be grappling with in 2025," let me outline some reflections at the intersection of the theory and practice of computer science, law, and neuroethics.
"Our society and thus each individual has the possibility to (help) decide how the world, in which we want to live with artificial intelligence in the future, should look. Philosophy, law, and technology play a central role in the discourse that has to be conducted for this purpose" (from Fraunhofer IAIS's report "Trustworthy Use of AI").
The essential idea of these proposals is, drawing on the deep technical expertise of the computing community, to provide policy leaders and stakeholders with nonpartisan theses on policy gaps in the field of neuroethics and in the development and implementation of ML/AI tools. It is preferable that such regulation be a harmonious symbiosis of legal norms at the state level and norms adopted by socially responsible professional communities.
The approach mentioned in paragraph 6.2 of the CEN-CENELEC Focus Group Report: Road Map on Artificial Intelligence (AI), which in effect relies on autonomous self-regulation of AI tools (systems), seems inadequate to current risks and threats: "An alternative approach is that the system itself ensures that modifications of its functionality due to self-learning have no negative impact on assessment topics like safety or fairness." Further, in the same paragraph, the potential ineffectiveness of a certification system for AI tools is effectively acknowledged: "For artificial intelligence systems that learn as they are used, the problem is that the behavior of the system changes continuously and will require a new conformity assessment each time."
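To make the certification difficulty concrete, here is a minimal, purely illustrative Python sketch; it is not drawn from the report, and all names in it (CertificationRecord, needs_reassessment, the audit inputs and tolerance) are hypothetical. It assumes a regulator captures a model's outputs on a fixed audit suite at certification time; once the deployed model keeps learning, its behavior drifts past the recorded tolerance, and the original certificate no longer describes the system in operation.

```python
# Illustrative sketch (hypothetical names, not from the CEN-CENELEC report):
# why one-time conformity assessment breaks down for self-learning systems.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class CertificationRecord:
    """Outputs the model produced on a fixed audit suite at certification time."""
    audit_inputs: Sequence[float]
    certified_outputs: Sequence[float]
    tolerance: float  # maximum allowed per-input deviation from certified behavior


def needs_reassessment(model: Callable[[float], float],
                       record: CertificationRecord) -> bool:
    """Return True if the deployed model has drifted from the certified behavior."""
    for x, y_certified in zip(record.audit_inputs, record.certified_outputs):
        if abs(model(x) - y_certified) > record.tolerance:
            return True  # behavior changed: the old certificate no longer applies
    return False


if __name__ == "__main__":
    # A toy model whose single parameter drifts as it "learns" in the field.
    weight = 1.0
    record = CertificationRecord(
        audit_inputs=[0.0, 1.0, 2.0],
        certified_outputs=[0.0, 1.0, 2.0],  # captured when weight == 1.0
        tolerance=0.05,
    )
    for step in range(5):
        weight += 0.02  # each online update shifts behavior slightly
        model = lambda x, w=weight: w * x
        print(f"step {step}: reassessment needed = {needs_reassessment(model, record)}")
```

After only a couple of simulated update steps the check flips to True, which is exactly the report's point: under continuous self-learning, each behavioral change would in principle demand a fresh conformity assessment, making a one-time certification regime impractical.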